10.5446/18983 (DOI)
So, gases. We're going to go through a lot of things in this one. We're going to start sort of conceptually and then walk into the calculations. But by the end of this quarter, the thing that we need to be able to do is know, both conceptually (you guys have figured out I like those questions) and through calculation, how volume, pressure, and temperature are all related to each other. You're going to know a lot of this already, especially once I point it out to you, which I realize is a little bit of an oxymoron. But you'll realize that you know a lot of it already, at least conceptually. And we'll actually put that into equation form, probably today, if not today then next class. So we're going to learn the ideal gas law, which formalizes the idea you already kind of have about how things work in real life. All of that applies whether you have one ideal gas or a bunch of different mixtures of ideal gases, and you'll see how you can calculate really complicated problems with lots of different gases all together just by breaking them down into a whole bunch of simple problems. Then, after we start with sort of bulk properties, these big samples, you know, a liter or two of gas, talked about in terms of pressure, volume, temperature, moles, things of that sort, we're going to go a little bit deeper and look at the kinetics of it, really qualitatively. Kinetics really isn't until 1C. But we'll get an idea of how the actual way the atoms are moving around gives you those bulk properties, the pressure and the volume, and why the things the atoms are doing make those properties be like they are. That's called kinetic molecular theory, so we'll get into that near the end. And then, as is standard in chemistry, we're going to tell you a bunch of things that almost work, and then we're going to tell you why we lied to you a little bit. You're getting the pattern there, I think. So we're pretty much going to spend 95% of the chapter on what we call ideal gases, which we'll get into. But real gases aren't going to be ideal. And so we'll show you one of the ways that you can sort of make up for this and fix issues with not being quite ideal. There are actually lots of ways to do this, but we're just going to pick the one that's used the most, and we're going to walk through that. So that'll be the last 5% of the chapter. OK. First, what is a gas? Kind of going back: we talk about solids, we talk about liquids, we talk about gases, right? Those are sort of our three main phases. The idea with gases that's different from all the rest is that the molecules are moving freely. They aren't necessarily near each other; depending on the pressure, they may be farther from or closer to each other. But in general, they're just moving freely. It's not like a liquid, where they're all sitting on top of each other, just rotating around. Or like a solid, which you can think of as a tight lattice. Like if we filled up all the chairs in here, you guys would be sort of a solid lattice. Yeah? Sorry, can I ask you a question: do you know if the bulk of the final is kind of on chapter 4, or...? Oh, yeah, actually, I meant to say that. So she asked about the final, for the sake of studying, so you know: it'll be 25% midterm 1 material, 25% midterm 2 material, and 50% chapter 4. Okay.
Sounded like you guys were surprised. I'm not sure why, but I guess it was a good thing to talk about. Okay. So unlike liquids and solids, gas molecules aren't going to be touching each other most of the time. When they are, it's going to be a quick thing, right? They're going to bounce off each other and move their own way. They're not really going to be interacting with each other at all, or at least not much. Now this gives gases properties that liquids and solids don't have. If you take a solid, like a real solid that doesn't have a bunch of air pockets in it, and you try to compress it, you're not really going to be able to do that. Take a solid piece of metal and try to compress it. You're not going to be able to do it. You can say, well, I can compress something like wood, but what are you really compressing? Air pockets and cell pockets and things like that; you're not actually compressing the solid. So anything that's a solid, you're not going to actually be able to compress. Same thing with liquids. If you try to take water and compress it down, you might be able to get a tiny bit if you have really good measurements, but nothing perceptible. Now, can you take a gas and compress it? Sure. That's what all these sorts of containers are, right? They take a ton and ton of gas and they put it all into a little container, so that you can fill up balloons all day at a party with just one little container. Now, gases also don't have a defined shape, which of course liquids, gases, and, if you believe the internet, cats all lack. Some of you have seen that meme. So they fill the shape of whatever you put them in. If I put them in some odd-shaped container, they're just going to fill it up, whatever it is. It doesn't matter. A solid doesn't do that, of course, right? You put a solid into a container, it's just going to sit there in the same shape it always was. Gases are also going to expand. Now this is different than a liquid, right? If you put a liquid in a container this big and then you give it a container this big, the only thing it's going to do is flatten itself out. It's not going to get bigger, where a gas is: a gas is going to expand to fill whatever space you give it. They're going to mix evenly amongst each other. Sometimes liquids will do this, sometimes they won't; it depends. You'll learn about that in, I think, 1B. But in this case, gases are going to mix. They're lower density. So that sort of makes sense, right? They weigh less for a given volume because they're not as closely packed. They're not all on top of each other. So those are your general guidelines for what a gas is. That's how we define a gas. I'll give you a few more minutes to write this down. All of these are things that I think on some level you probably know, but now they're written out and defined, so I could ask you, how do you know if something is a gas? And you could say, well, I can compress it, it takes the shape of its surroundings, it'll mix evenly with other gases, things of that sort. OK. So now a little bit of a word on units, because it's one of those things of life. I'm not going to go through unit conversion with you; I don't think you would have made it past the first midterm if you didn't know how to do unit conversion. But I do want to talk about the units. So we have something called a pascal, which is a newton per square meter.
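As an aside, here is a minimal sketch of the back-and-forth conversions you'll be doing constantly in this chapter. The conversion factors are the standard definitions (1 atm = 101,325 Pa = 760 mmHg), not values taken from the lecture slides, and the 764 mmHg reading is just the example value mentioned below.

```python
# Standard pressure-unit conversions (definitions, not slide values).

ATM_TO_PA = 101_325.0      # 1 atm in pascals (newtons per square meter)
ATM_TO_MMHG = 760.0        # 1 atm in millimeters of mercury (torr)

def atm_to_kpa(p_atm: float) -> float:
    """Convert a pressure in atmospheres to kilopascals."""
    return p_atm * ATM_TO_PA / 1000.0

def mmhg_to_atm(p_mmhg: float) -> float:
    """Convert a pressure in mmHg to atmospheres."""
    return p_mmhg / ATM_TO_MMHG

print(atm_to_kpa(1.0))     # 101.325 kPa, i.e. roughly 100 kPa per atm
print(mmhg_to_atm(764.0))  # ~1.005 atm, a typical barometer reading
```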
So gases really don't have one unit that's pretty much always used; how people report pressure is really sort of scattered. So make sure you do know the other units. Of course, you'll have the same equation sheet for your final that you had for midterm 1 and midterm 2, so you know that you'll have these given to you. You'll also see kilopascals a lot, just because kilopascals get you closer to an atm. What is approximate atmospheric pressure? About one, right? So that's where this comes in. But a lot of times you see it measured in millimeters of mercury. Why millimeters of mercury? I'll show you in a minute. Actually, I'll show you right now. I hear lots of people whispering, go back. I'll go back a sec. So just know how to convert back and forth between these. You're going to do it a lot, and you want to be able to do it really quickly, because when we do the ideal gas law, there are going to be a lot of times that you have to work in atmospheres. There are going to be a lot of times that you don't, and you want to watch out for those, because you don't want to waste a lot of time converting units when you don't have to. But you want to be careful to make sure you always convert them when you do. And when we get to those sorts of problems, I'll go over it in more detail. OK. So now, where the millimeters of mercury thing came from: something called the barometer, which you've probably seen somewhere in an antique store or something. The idea behind barometers is that they give you a measure of atmospheric pressure. You can't really see what's happening in something like this; this is like an old, make-it-in-a-lab style barometer. So what you do with this is you take an empty tube and put it in a vat of mercury, and you leave a little bit of room around the mercury so that the atmosphere can push down on it. OK. So the atmosphere is going to be pushing down on this dish, or not dish, but liquid vat of mercury. When it does that, what happens if you push on a liquid and you have an empty tube in there? It goes up, right? So if the atmosphere pushes on this, it's going to push the mercury up into the tube. That's how a barometer actually works. So you have a vacuum tube, you have a dish of mercury, and then the height depends on how much atmospheric pressure we have, because it changes day to day, right? It changes with storms, it changes with a variety of different things. So this will give you a measure of that. Now, because this was how atmospheric pressure was originally measured, it ended up being reported in millimeters of mercury, because you could just measure how much mercury was going up into the tube. You could sit there with your ruler and say, OK, well, the atmospheric pressure today is about 764 millimeters of mercury. So that's how the millimeters of mercury unit came about. Now, maybe a good question at this point becomes: OK, mercury. Why did we use mercury? What's special about mercury? It's a liquid, right? And why maybe not water? What else is special about mercury? It's really heavy, or I guess we should say dense, right? A small amount of mercury is going to weigh a lot more than a small amount of water. So this height is going to be dependent on how heavy the liquid is, right? Because all of this right here, that's being pulled down too, right, while it's being pushed up by the atmospheric pressure pushing into the tube.
What is it being pulled down by? I guess maybe pulled is the better term here. Gravity. So the heavier, the more dense that this liquid is, the smaller that height is going to be. Now, why would we want something to be small? Well, let's talk about it. Let's do an example. OK, so suppose we were marooned on a tropical island, and we wanted to know the atmospheric pressure, because that's really important information to know when you're marooned on a tropical island. It could be storms, right? You need to know if a storm is coming. OK, so how do we do this? Well, you have to know first that the density and the height are going to be related to each other: the product of density and height is the same for both liquids. So we can set this up. Now, this looks pretty similar to an M1V1 = M2V2 problem, right? It's the same sort of idea. It's a ratio. Now, we have the density of seawater, because that's theoretically what you have around. We have the height that it would reach in a mercury barometer. So let's try to figure this out. Well, let's set this side up to be seawater, and this side up to be what a mercury barometer would read, and fill in all our numbers and see what happens. So our density of seawater here is 1.0, or excuse me, 1.10. And we don't know what our height of seawater is, so we'll just leave that as h, our height of seawater. And then we know what our density of mercury is, so we'll fill that in. And we know our mercury barometer reading. We don't have to worry too much about our units here, for the same reason we don't have to worry about it in M1V1 = M2V2, or, if you haven't seen that recently, sometimes they do C1V1 = C2V2: because it's a ratio, the units all cancel out. So we're left with the height of the seawater. And we end up with 908 centimeters. So that's pretty big, right? So why wouldn't we want to use a water barometer in our houses? If you've seen these anywhere, it was probably at maybe your grandma's house or something, where it would be next to a thermometer, things like that; you want to know the temperature, the atmospheric pressure. So why wouldn't we want to use water for it? Yeah: who wants to have a barometer that's that big sitting around, right? So it's a density and height thing. We can use mercury because it's so dense that it doesn't take up a lot of height. And that was why it was picked. Why don't we tend to have them sitting around our houses now? Turns out mercury is kind of toxic. Same reason that we don't really have mercury thermometers anymore. Okay. All right. So now we're moving on to a few definitions that we'll need for the chapter. What is an ideal gas? So we talked about what a gas is, but what is an ideal gas? This is a definition that we've made up in order to be able to do a bunch of calculations, and it's a relatively good approximation for a lot of gases. Sometimes it breaks down, and we'll talk about those examples, but for a lot of gases, approximating them as ideal gets you pretty close. So if you have an ideal gas, or something that acts like an ideal gas, the molecules are going to move completely randomly. That means they don't interact. So if you have two molecules and they come close to each other, if they're ideal gases, or they're acting like ideal gases, they're not going to have much interaction with each other.
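Backing up to that barometer calculation for a moment, here is the ratio written out. Treat the numbers as a reconstruction: the 1.10 g/mL seawater density and the 908 cm answer are from the lecture, the 13.6 g/mL mercury density is the standard value, and the 73.4 cm mercury reading is inferred from the stated answer.

$$d_{\text{sea}}\,h_{\text{sea}} = d_{\text{Hg}}\,h_{\text{Hg}} \quad\Longrightarrow\quad h_{\text{sea}} = \frac{(13.6\ \text{g/mL})(73.4\ \text{cm})}{1.10\ \text{g/mL}} \approx 908\ \text{cm}$$

The same atmospheric pressure supports a column roughly twelve times taller when the liquid is roughly twelve times less dense, which is exactly why mercury won out over water. Now, back to the ideal gas picture: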
They're just going to kind of whip by each other and keep going. They have no volume. Now, if you think about an atom, we can agree that the atom has volume, right? Well, for the sake of ideal gases, we're going to pretend that's not a thing, that they don't have volume. And of course, this is sometimes more true than others, right? Something like helium is going to be a lot smaller than something like nitrogen, so you would expect helium to be a bit more ideal than nitrogen. But it's a good approximation. All collisions are elastic. What does an elastic collision mean? It means they don't lose any energy. The technical way to say it is that there's a complete conservation of kinetic energy. So if two things bounce into each other with a certain amount of energy, they're going to fly off with the exact same total amount of energy. Now, maybe one that was going slower is now going to move faster, or one that was moving faster is now going to move slower. You can kind of think of it like billiard balls, only of course with pool balls it eventually stops because of friction, so it's not completely elastic. But it's a good visual, if it helps. So a lot of your gases are going to be able to be treated as ideal, and then a lot of gases are only going to be able to be treated as ideal in certain situations. It's going to work really well at low pressures, and it's going to work really well at high temperatures. So let's think about why that is, because this is hard to remember if you don't think about why it is, and I'd rather you understand it than memorize it. Low pressures. So at low pressures, is the space between two molecules going to be really big or really small? At low pressure, it's going to be really big, right? At low pressure, you're giving them lots and lots of room to move around. So let's say we take out all the seats here. We don't want you guys to be a solid anymore. We take out all the seats, and I have all of you in here. That's pretty high pressure, right? We'll pretend some of you guys are in competing sororities and fraternities, too; there's a lot of pressure. Now I take three quarters of you, and I say, OK, go away. We've lowered the pressure now, right? So is there going to be more room between those people or less room? More room. So are they going to interact as much? No, they're not going to interact as much, because they're going to be more spaced out. We'll give them blindfolds and tell them to wander aimlessly, so they're not going to interact. And because of that, they're going to be more like an ideal gas. Now, if instead we just put tons and tons and tons of people in here, that's like mimicking our pressure going up, and now they're going to interact a lot. Well, that means they're not acting like an ideal gas, OK? So this one has to do with how much they're going to be interacting with each other. High temperatures? We may have to leave that one a little bit toward the end, but I'm going to go ahead and explain it here anyway. So, high temperatures: does anyone remember what molecules do at high temperatures? Are they faster or slower? Faster. You do all remember, yay. OK, faster. So if they're really, really fast and they're bumping into things, their collisions are... let's actually reverse that: low temperatures. So at low temperatures, they're going to be doing what? Going really, really slow.
Now let's say that some of these people are friends and they're talking to each other. If I tell them they all have to run around really, really fast, or they get to move around really, really slowly, where are they going to be able to talk to their friends more? When they're going slow, right? If two people are just kind of walking by each other, OK, well, they can talk. If they're racing by each other, they're not going to be talking much. So this has to do with how much they can interact, too. If they're going slower, and there's an attraction between two molecules, they're going to be able to interact a little bit more. So all of this is: how do we minimize interactions? OK. So let's do some talking about what's going to happen if we change different components of a system. I think I've shown you one of these before. There's a great website, and there are a lot of these simulations on there. If you're someone who really likes learning by playing around with things, go visit this website and just play. It's fun. I've shown you this, I think, with the photoelectric effect; I think that was the last time I showed this to you. So let's say we have a container, and we have lots of different things we can change. We can change temperature. We can change the amount of gas we have. We can change volume. We can change all sorts of things. We can even change the size of the species. So let's say we put some gas in here. That was maybe a little more than I wanted. Luckily, we can let it go, too. OK. So we have some gas in here. Now let's base this all on what's going to happen to the pressure if we change different things. So let's say we make the volume smaller. What do you think is going to happen to the pressure? It's going to get bigger. And we can see this here. Definitely gets bigger, right? Now, let's move this back out. What do you think would happen, based on what we just talked about on the last slide, if we increase temperature? What happens to the speed of the molecules? It's faster. And so what do you think that's going to do to the pressure? It's going to go up, right? They're all going to have a little bit more energy. So we can do this. We can raise the temperature. We can watch the thermometer climb. We can watch the pressure climb. OK, I'll stop it now or it'll burst. Put some ice on it. OK. So if we decrease volume, what happened to our pressure? It went up, right? Because we were squishing the molecules together. We increased our temperature. What happened to our pressure? It went up, because now they're moving around faster. Now, what happens if we just add more molecules? What's going to happen to our pressure? It's going to go up, too. And it may burst. OK. So those are the things we're going to go through and talk about in a little bit more detail now. But most of you were able to pretty much guess what was going to happen before it happened, right? So that's good. You already know some of this stuff. So when we put it into equation form, that's really all we're doing: we're taking what you already kind of know, and we're putting it into equation form. Go and play around with this a little bit, though. It is kind of fun. OK. Now, all of these that I'm going to be talking about have names associated with them. Perhaps I should care more that you can associate the name with the equation, but I don't. I want you to know the equation. I want you to be able to use it.
I want you to be able to make graphs of them, and I want you to be able to explain them. Four years from now, if I see you walking down the street and I say, hey, Boyle's law, go, I want you to be able to say it has something to do with gases, and then give me a lecture on gases. OK, maybe the last one was a little optimistic. But hey, I want you to know that it has something to do with gases. I should be able to ask you to list the different gas laws, and you can list the names. And if you can't associate each name with its equation, I don't necessarily care too much. OK. So the first one we're going to talk about is Boyle's law. Now, you're also going to notice that, where normally I intersperse examples, I'm going to wait to do examples until the end, because you can do all of these using the combined law at the very end. And I think that's almost a little bit better to teach you than using the individual laws. So I'm going to hold all the examples until the end of this section. OK. So, Boyle's law: pressure and volume. This was equivalent to me squishing the box a little bit. If you increase the volume, pressure decreases. We sort of did this in the opposite manner, right? We had this box with all the molecules floating around, and then we made the box smaller. And when we made the box smaller, what happened to the pressure? It increased, because you were squishing all the molecules together, right? It's like if I put everybody free wandering around with blindfolds on and earplugs in their ears, and you're wandering around in the room, and now I start bringing the walls in, right? That's going to increase the pressure. Everyone's going to be a little bit more squished. They're going to be running into the walls more. That's increasing pressure. So this just states the reverse: if you increase the volume, now the walls are expanding out, and everyone can move around a little bit more. Everyone's hitting the walls a little less. That's decreasing the pressure. Now, this is true when you have moles of gas and temperature held constant. For all of these laws, in order to say, when you do this, this happens, we have to hold everything else constant. Otherwise, whatever else is changing could change things. So in this case, this is true if you hold moles of gas and temperature constant. Now, with all of these, we want to see what the graphs look like, and we want to see what would happen to pressure as we decrease volume, say. So we'll look at that in a minute. So here we have pressure versus volume, and pressure versus one over volume. You have this sort of decay here, this one-over-x shape, if you remember back to your algebra 2 days, where you had to recognize shapes of graphs. And this is because as your volume increases, your pressure decreases. Now, it's a little bit easier to see if we actually graph this in a linear fashion. We say, OK, well, we have pressure here, and we have one over volume here, and we can make a linear graph this way. From an experimental standpoint, a lab standpoint, this is a much easier graph to work with. Why? Well, we can come up with the equation for this line, and then we could say, well, for this system at any given volume, what's the pressure? And you could just find it. So these are the sorts of graphs I want you to know how to recognize, see, sketch, things of that sort.
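Here is a minimal numeric sketch of Boyle's law; the starting values are made up for illustration, not from the lecture. The point is that at constant moles and temperature, the product P times V stays fixed, so halving the volume doubles the pressure.

```python
# Boyle's law sketch with hypothetical numbers: at constant n and T,
# P1*V1 = P2*V2, so P2 = P1*V1/V2.

p1, v1 = 1.0, 2.0          # hypothetical starting point: 1.0 atm, 2.0 L
for v2 in (2.0, 1.0, 0.5):
    p2 = p1 * v1 / v2      # squeeze the box down, pressure climbs
    print(f"V = {v2:4.1f} L  ->  P = {p2:4.1f} atm")

# Note that 1/V is the quantity that's linear in P: plotting P against
# 1/V gives a straight line, which is the lab-friendly graph described.
```

OK. So the next one that we're going to talk about relates volume and temperature.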
So this comes into play in lots of different places where you have cold weather, actually. Occasionally you'll see this also called Gay-Lussac's law; you can kind of interchange the two. So this one says: as temperature increases, volume increases, and as temperature decreases, volume decreases. You can see this one fairly easily in real life. Has anyone ever stuck a balloon in the freezer? OK, maybe that's not the most normal thing to do as a kid, but you know, I like chemistry. We don't have time to deal with freezers here today, though, so we're just going to use liquid nitrogen. I don't think I've ever brought liquid nitrogen to this class. What liquid nitrogen is, is exactly what it sounds like: nitrogen, as a liquid. And you have to get it very, very cold in order to make it liquid. So now we've made our balloon cold, and it shrunk. And now it's warming up, so it's getting bigger. It's also still very, very cold. And maybe I overdid it... oh no, there it goes. I thought I might have overdone the coldness, but no. And if we let it sit for long enough, it'll go back to its original shape, assuming the rubber didn't do anything bad. And you can just do it over and over again, right? It's not like we lost molecules. You saw me stick it in, you saw me pull it back out. I didn't magically switch balloons or anything like that. I can have someone come up and check to make sure I'm not doing a magic trick here. Mostly, I have no motivation to lie to you. So this is how you can remember this one, right? You stick a balloon in the freezer, or you're impatient like I am and you stick it in liquid nitrogen: it shrinks. And I can only make it so small, because this isn't a perfect system, and I'm worried about the balloon breaking. And as it heats up, it gets bigger. And again, why is this? What's happening to the molecules as we heat them up? Yeah, the speed is increasing. So as you increase the temperature, you increase the speed, and that makes more pressure, or more volume, excuse me. So we're treating this as a model of basically constant pressure, because remember, we have to keep the other things constant, right? So what is the general pressure on this balloon? Atmospheric pressure, right? So a balloon works well for a constant-pressure situation, because you can say, well, OK, it's basically at atmospheric pressure. Sure, the rubber is going to make a little bit of a difference, but for the most part, it's just kind of atmospheric. Okay. So, places this comes into play in real life: it's not quite as big of an issue here, because you don't have a lot of weather, but if you live someplace like Alaska and you fill your tires in the middle of winter when it's like negative 20 degrees, and then summertime comes, what happens to your tires? Yeah, the volume is going to try to expand. And again, real life doesn't tend to be a great model for these laws individually, because it's hard to hold everything else constant; eventually you'll increase your pressure so much that your volume can't increase any more, and it'll blow. Here's a little bit better way of modeling it. Has anyone gone camping and slept on an air mattress? Yeah. Well, either the rest of you are hardcore campers or you should try it sometime. It's fun. I like air mattresses when I camp because, you know, the ground's hard.
So what happens is, if you fill it in the middle of the day, especially in the summer, you fill it up, you're ready to go to bed, you go to bed, and then you wake up at 2 AM on the ground. And while sometimes that happens because there's a hole in the air mattress, a lot of times it's really just because it got cold. So now you're cold and sleeping on the ground. That's when camping becomes not quite as much fun. So the lesson to be learned here is: you set up your tent first, and then you don't fill up the air mattress until the temperature has already started dropping. Okay. Also, the, you know, actually important lesson: that volume and temperature are proportional. That's good to learn too. Okay. So this is the same sort of picture that was up for the other one. You guys have a similar picture in your book; this one comes from the Chang book that we used in previous years. It just goes through and shows you the exact same idea: if you're holding moles, R (which we haven't really talked about yet; we'll just call it a constant), and pressure constant, what's going to happen as you change the temperature. And then the reverse of that too, with pressure, which we sort of talked about with the tires, right? If you increase temperature, what do you think is going to happen to pressure, if the volume isn't changing much? That's going to increase too. So pressure and volume are kind of closely related in this one, in the sense that in real-life examples it's hard to hold either of them completely constant. So keep this in mind when you're looking at things: if you increase the temperature, you're going to either increase the pressure or the volume, assuming everything else is held constant. So let's look at a graph of this. I heard a "go back," so we'll look at this one more time. As you raise temperature, you can increase your volume. Or, if you're holding your volume constant, if you're refusing to let this move, you can increase your pressure. If you lower your temperature, you're decreasing your volume. Or, if you're holding your volume constant and instead allowing the pressure to change, your pressure is going to decrease. Okay? It's very similar. Okay. So now let's look at a graph of this one. Now, this would be the same if we put pressure over here instead: whether we hold volume constant or whether we hold pressure constant, it turns out to be the same graph. And these are linear. So for an ideal gas, if you know how much you're increasing the temperature by, you know how much you're increasing the volume by, and vice versa. Okay. Now let's look at this one. This one's a little bit more complicated, right? Now we have four different lines. And what's the big difference here? Yeah, your pressures, right? So this is an ideal gas graph of Charles's law, but at different pressures. Because what do we know? We know pressure affects this, right? What does pressure do to volume? If you increase the pressure, it decreases the volume. So what happens here is that changing the pressure changes which line you're on. This just shows you, for different pressures, what's happening to your volume. Okay? So if you had four atmospheres, you would have it here; two here, one here, point five here. All right? So that's all that graph is going through.
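Here is a minimal sketch of Charles's law with made-up numbers, using the Alaska tire scenario idealized to constant pressure. The key point is that the ratio V/T is fixed, and T has to be absolute (Kelvin), which is why a big swing in Celsius is only a modest fractional change in volume.

```python
# Charles's law sketch, hypothetical numbers: at constant n and P,
# V1/T1 = V2/T2, with T in Kelvin.

t1 = -20.0 + 273.0    # fill the tire in an Alaskan winter, -20 C
t2 = 30.0 + 273.0     # a summer day, 30 C (hypothetical)
v1 = 10.0             # hypothetical winter volume, liters

v2 = v1 * t2 / t1     # V2 = V1 * (T2/T1)
print(f"{v2:.1f} L")  # ~12.0 L: about a 20% expansion, not a doubling,
                      # because 253 K -> 303 K is a modest absolute change
```

Okay. Next one: Avogadro's law. So what is Avogadro known for? Avogadro's number, which has to do with what?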
Number of atoms, right? The number of atoms in a mole. So what do we think this one's going to have to do with? Moles. So let's say we have a balloon. Maybe. All right, we have a balloon. So we can make it bigger a couple of different ways. We already talked about one, right? Changing the temperature. So I could put this someplace warmer. What's an easier way, for the setup at the moment? Okay, it's bigger now. So what does that mean? What did I do to make it larger? I increased the number of moles. I made it bigger by putting more molecules in it, right? So this one says that moles and volume are going to be related to each other, and that they're going to be proportional, right? The more moles I put in, the larger the balloon gets. And again, you can kind of model this as a constant-pressure system. So this one relates moles and volume. As your number of moles increases, so does your volume. So if I want to make this smaller, what do I do? Now it's really small, but I could have just let a little bit out. It kind of slipped. All right. So this is true when your pressure and your temperature are held constant, right? Because if I had changed my temperature a bit, then all sorts of other things are happening that we'd have to calculate in. So for all of these, we're only relating two variables at once. If you bring in a third or a fourth variable, you can't necessarily say this. Sure, if you increase the number of moles, that's going to add to making it larger. But what if I then make it really cold? What's going to happen? Well, it depends on the ratio, right? It depends on how much extra gas I put in versus how much the cold is shrinking it. Okay. So all of these involve graphs, right? You have to be able to graph all of these in a similar fashion to each other. I didn't graph this one; it would just be linear again. It would look the same as the Charles's law graph. So, we have two people talking. I don't know if you've ever read xkcd; if you're in the sciences and math, you really probably should, just as part of life. So we have these two, a guy and a girl. He says, I think we should give it another shot. She says, we should break up, and I can prove it. She graphs their relationship, and it's obviously been going downhill. He says, huh, maybe you're right. She says, I knew data would convince you. And he says, no, I just think I can do better than someone who doesn't label her axes. Kind of classic xkcd humor. Don't forget to label your axes, right? If I tell you to graph pressure versus volume, and you don't put pressure and volume on there, maybe I'll think you had it flip-flopped. Maybe I'll think temperature was down here and pressure was up here. So if I ask you to draw these graphs, make sure you label your axes properly, okay? You need to make sure that you put temperature and pressure, or temperature and volume, or whatever it is that I'm asking for. Otherwise, I won't know that you actually knew which one went on the bottom and which one went on the top. Okay. So now, if we come back here a minute, we could actually go through and calculate all of this if we wanted. We could go through and change the temperature by a set amount and calculate the new pressure. We could say, okay, well, we want this to be exactly 600 and see what would happen.
And we could say, okay, well, we know how many molecules are in here, and we could put exactly double in and calculate the difference. We could do all three, right? We haven't really talked about how to do that in detail, but we could. We could say, okay, I'm going to increase the pressure by adding more gas, but then let's say I want to offset that. How would I offset that using temperature? Make it colder. So I could remove heat by doing this, and you could get it to go back to the exact same pressure. And we kind of guessed all this ahead of time. You guys are really good about guessing it all ahead of time, so I won't walk through it in too much detail again. But you can go on there if you didn't quite guess it ahead of time and play around with it a bit. Okay. Now, though, in the last few minutes, we're going to take all of what we just learned and combine it together. And then we're going to do a bunch of examples. To me, you can do all of the examples from the ideal gas law and something we're going to derive off of the ideal gas law, so I think it's a little bit better to teach it after we've learned everything. Okay. So this is going to be a combination of all of the things that we've learned. Up until now, we've said you can relate two of these variables at a time, and as one increases, the other increases or decreases; they're proportional or inversely proportional, right? But we haven't really talked about what happens if we want to change lots of things. What happens if I want to add more moles, but then I want to cool it down? Or I want to increase the pressure, but also increase the volume? The ideal gas law will allow us to do this. So what this is basically doing is putting all of those relationships together into one form of the equation. Now, there's this R that has sort of shown up in some of the figures we've been talking about, and we haven't really discussed it. This is called the ideal gas constant, and it's just a constant that relates all of these together. So if you take all of the individual equations and you combine them, you get this. And if you go through and cancel everything out, you can say, well, now I can see that pressure is proportional to 1 over volume if all of the others are held constant. Or volume is proportional to temperature if pressure and moles (and of course R, which is always a constant) are held constant. This incorporates everything that we've just talked about into one equation, which is really nice. This is what R is equal to. Now, you may have R memorized slightly differently. Anyone have it memorized as something different? 8.31, does that sound familiar? That one has joules in it. That's also R, but that one has to do with energy. So as far as which one to use, because you'll have both of them on the exam, how do you know to pick this one? Do you memorize it? No. Yeah, no is usually a good answer. How do you know? Let's look at this a second. Well, what is pressure in? Atmospheres. What is volume in, the standard unit for volume here? Liters. What about temperature? Kelvin. So are you going to want to use the constant that looks like this, with those units, or are you going to want to use something that has an energy unit in it? Yeah, this doesn't have any sort of energy going on, so we're going to use this one. Later on in the chapter, when we get into kinetic molecular theory, we'll start using the energy one. We kind of already did that.
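To have it written out in one place, here is the combined equation and how each named law drops out of it when the other variables are held fixed. These are the standard forms; the R value quoted is the usual liter-atmosphere one, which is what the slide's version amounts to.

$$PV = nRT, \qquad R = 0.08206\ \frac{\text{L}\cdot\text{atm}}{\text{mol}\cdot\text{K}}$$

$$\text{Boyle }(n,T\ \text{fixed}):\ PV = \text{const} \qquad \text{Charles }(n,P\ \text{fixed}):\ \frac{V}{T} = \text{const} \qquad \text{Avogadro }(P,T\ \text{fixed}):\ \frac{V}{n} = \text{const}$$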
So just be careful, and this is actually kind of what comes out of this: as we go on to problems, there are going to be ones where you really have to worry about units, and there are going to be ones where you don't. If you need to worry about them, use the ideal gas constant, and I have the units written on the exam to help you remember what to use, right? It's got to be liters, it's got to be atmospheres, and it's got to be Kelvin. Always, always, always. Even on the ones where I tell you you don't have to worry about switching units around too much, temperature has to be in Kelvin. If you see a temperature written down in Celsius, just put plus 273 next to it. Like, you're reading through a problem and you see 62.4 written down: write a little plus 273 there, so that when you fill it into an equation, you don't forget to do it. When these get kind of long, I'll start listing out what we know and what we're trying to figure out. Just put plus 273 next to every single temperature that's listed in Celsius so you don't forget. It's one of the most commonly missed things on an exam, and on a short-answer question, that's all your points, right? So just be really careful about that. Okay, let's do one of these. So we have a general idea of what happens when we change things. But now we can also calculate the result of one thing based on all the other things that we measure. So we have this: sulfur hexafluoride is a colorless, odorless, very unreactive gas. Calculate the pressure exerted by 1.82 moles of the gas in a steel vessel of volume 5.43 L at 69.5 degrees C. Okay. So we look at our PV = nRT, and we just write in a plus 273 here so that we don't forget. And what do we want to solve for? Well, we're looking for pressure. We're not changing anything here; we're not going from one set of conditions to another. So we can just solve for pressure, okay? Now we can just fill everything in, making sure all of our units are okay. So we have 1.82 moles. We fill in R, making sure to fill in the right one. And we can fill in temperature. I'll admit I normally fill it into my calculator just like this; if you do that, you have to watch out for parentheses, though. It may not be a bad idea to fill in the already-added number here, and do it in one step or two steps. And then we solve. And we get this, okay? So that's how you do an ideal gas law problem where you're not changing things, where you're not going back and forth. Okay. So let's see, we have two minutes. See if you can do this one in two minutes, and I'll give you the answer. You guys see if you can beat me to it. You're going to do it exactly the same, only you're solving for what now? Volume. I would suggest rearranging the equation first. Especially for these, that's your best bet, especially when they get harder. Down to one minute, so I'll get you started on the equation. Looks like you're at the calculator point, so I'll fill some things in for you. All right. Do we have answers? What do we think? Do it Price Is Right style: what's the first digit? We get 9.28 liters. Okay? So that's how you go about doing those. Next time, we'll look at how to calculate when things are changing: when we start with one set of conditions, change the conditions, and see what we get.
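For reference, here is the SF6 calculation worked numerically. The givens (1.82 mol, 5.43 L, 69.5 degrees C) are from the lecture; the final pressure is left on the slide in the transcript, so the value printed below is my computed result, not a quoted one. The second practice problem's givens aren't in the transcript, so it isn't reproduced here.

```python
# The SF6 example: solve PV = nRT for P, watching units and Kelvin.

R = 0.08206                  # L*atm/(mol*K), the non-energy version of R
n = 1.82                     # moles of SF6
V = 5.43                     # liters
T = 69.5 + 273               # ALWAYS convert Celsius to Kelvin

P = n * R * T / V            # rearranged ideal gas law
print(f"P = {P:.2f} atm")    # ~9.42 atm (computed, not read off the slide)
```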
UCI Chem 1A is the first quarter of General Chemistry and covers the following topics: Atomic structure; general properties of the elements; covalent, ionic, and metallic bonding; intermolecular forces; mass relationships. Index of Topics: 0:02:27 Gases 0:06:26 Pressure of Gas 0:07:25 Barometers 0:14:25 What is an Ideal Gas? 0:22:09 Boyle's Law 0:26:02 Charles's Law 0:33:19 Avogadro's Law 0:37:02 More Predictions of Gases 0:38:05 Ideal Gas Law
10.5446/18980 (DOI)
Okay, so moving on with MO theory. We were on this slide, going through the examples; we hadn't really started on the examples yet. Why don't noble gases form diatomic molecules? We had sort of touched on this one already, but let's actually draw out a couple of these diagrams: one, to give us some practice and remind you how to do it, and two, to do this explanation. Odds are, if I were to ask this on an exam, I would probably ask you to give me some reasoning for it that would involve the MO diagrams. We answered it quickly last time; now I want to go a little bit more in depth. So let's just look at a couple of examples. Let's take the first two, since that's really all we've been able to deal with so far: helium and neon, just so that we can use these to explain it. So first of all, for helium, we would draw out the diagram. We always draw out the atomic orbitals, too; it's not correct if you don't have the atomic orbitals. You notice I tend to draw brackets instead of dotted lines. Honestly, it's just because I have a hard time drawing perfectly straight lines all the time, and it gets a little messy. The brackets are a little bit cleaner to me, but it really makes no difference. I'm going to set it up like that. Now, labels. Don't forget to label everything. A lot of you guys did that on your atomic energy level diagrams on the first test; don't do that here. So this will be a 1s, and a 1s. As with all charts, we need to label our axes, so we're going to label that and say that's energy. Now we have to decide what we have here. We're combining the s orbitals, and what do we get out of that? Sigma. So am I done labeling? What did I forget? I forgot a star. Good. So now am I done? You have to put the subscripts in. I realize that gets a little tiny, since I'm doing subscripts of something small already. So we subscript those 1s, because they come from the 1s orbitals. OK. So that's our basic outline. Now, how many electrons does helium normally have? Just one helium? Two. We can go like that. Now we fill them in. We don't really have to worry too much about where everything comes from; we just ask, how many electrons total do we have? Four. So do we start low or high? Low. One, two, three, four. OK. Anything else we need to do to make this diagram complete? Nope. So let's look at this a second now and say, well, why would this form or would this not form? Could we form He2? To do that, let's look at the bond order. There are a couple of different ways of looking at this; the most formulaic way is to say, well, it's one half times the number of bonding electrons minus the number of antibonding electrons. So we'll figure that out. So it's one half. How many electrons in bonding orbitals do we have? Two, right? Just from here. And then how many in antibonding? Just the ones with the star. So it's two. So what's our bond order? Zero. Well, if it's a bond order of zero, is that forming a bond? No. So for that noble gas, there's not going to be a bond. It's not going to form. Now, of course, at this point, you're probably saying, well, neon will do the same thing. But let's go ahead and draw it, just for some extra practice. So let's just do valence: I only want the n equals 2 levels. So if we do that, we first have to draw out our atomic orbitals like this. Sorry, that should be a 2: 2s and 2s.
And since I put "valence" up here, we're only going to draw out the n equals 2 level. Now we have to make some decisions here. Down here, if we combine these, what are we going to get? Two orbitals, four orbitals, three orbitals? Two. One will be high, one will be low. What are their names? Sigma and sigma star, with the 2s labeling on the bottom. Since I realize it's getting small: 2s and 2s is down there. OK, now we have to make some decisions. When we come up here, we know that we're going to have how many molecular orbitals? Six. But we have to make a decision about which one's going to be the lowest. Is the sigma going to be lower, or are the pi ones going to be lower? In this case, it's the sigma, right? Remember that there's that flip-flop that happens between nitrogen and oxygen. So oxygen, fluorine, and neon are going to have this pattern, OK? Where your sigmas are on the outside, your top and bottom. And remember, the subscript down here now is going to be 2p, because it comes from the p orbitals, and up here, 2p, because it comes from the p orbitals. And then these will both be pi's. And don't forget your star for your antibonding orbital. And then the subscripts will be 2p and 2p. So we have sigma 2p star, pi 2p star, pi 2p, sigma 2p. OK? Now, Pabel asked me: does it matter how far apart the energy levels are? Because in my slides from last week, I showed you that nice diagram with them all laid out, and I showed you where they were kind of incrementally going down and then flipping with these bottom two. For drawing these, it doesn't actually matter, so long as you get the order right. I can't ask you to gauge, put these closer than these, or something like that. That would be way too hard to gauge, and it's kind of beyond the scope of this class. The only reason I showed all those little differences in the energy levels on the slide was so that you could get the idea that it wasn't just a hard flip-flop between nitrogen and oxygen; it was sort of a gradient, and that's just where it happened to flip. OK? But as far as drawing these, don't worry about exactly where the energy levels are. Now, something that is important, and I don't want you breaking out measuring sticks or anything: these should split about equally across from the p. This should be about the same distance as that, and this should be about the same distance as that. Again, don't break out the rulers, but make it look about like this. You don't want your p orbitals to be way up here, or way down here; they should be in the center. Just like here, where our two s's are kind of in the center of our two sigmas. So just try to keep it approximately symmetrical. The only thing I care about is that your atomic orbitals should be at about the center of the energies. OK. So now we get to fill everything in. How many valence electrons do we have? Neon has eight each, so sixteen total. So we fill it all in. So of course, when we go to do this, we have that. And if we figure out our bond order there, well, we have the exact same number in bonding orbitals as antibonding again. So we can go ahead and count them and say, how many are in bonding orbitals? 2, 4, 6, 8. And then antibonding: 2, 4, 6, 8. So our bond order is once again 0. So suppose I ask you: why don't these noble gases form diatomic molecules? And I ask you to use MO theory to explain it.
I mean, there are other ways of explaining it, of course. You could just use sort of pre-midterm-2 logic, the stuff that we learned during the midterm 1 period, and say, well, that's because they have a full valence shell and they're stable. But if I ask you to use MO theory to explain it, you would say, well, it's because their bond order is 0. So if you were to make this molecule, you would see that you have no bond. Just different ways of explaining the same phenomenon. OK. Let's go back for a sec. So: the MO diagram shows a bond order of 0. A lot of work to get to that answer, but it's good review. I sort of wanted to do some review on drawing MO diagrams and doing it from scratch. That is the sort of MO diagram that you will be expected to be able to do next Friday. OK. All right. Which species has the longer bond length, N2 or O2? So this again falls under the category where there are a couple of different ways you can figure it out, and a couple of different ways that you can explain it. However, let's go ahead and explain it with MO theory. Now that we have drawn this from scratch a few times, I'm going to cheat for the sake of time and move back to this slide. OK. So let's look at N2 and O2, since that's what I asked you to look at, and we're asking which one has the longer bond length. So what do we know about bonds? If we have a triple bond, is that going to have a short bond length or a long bond length? Short, right? It's going to be short; it's going to be holding everything closer together. So a single bond would have a longer bond length. So you basically just rank them in order of their bond order. For nitrogen and oxygen, you have a bond order of three here and a bond order of two here. So which one's going to be your longer one? Oxygen. Oxygen will be longer, nitrogen will be shorter. OK. Again, if I ask this on a test, I might ask you to draw the MO diagrams too, but you would just be able to draw these two. OK. So, for carbon monoxide, I say that the p orbitals combine in a way where the energy ordering is pi, sigma, pi star, sigma star. So I've told you the order of the energies, and I say, draw the MO diagram, and then I ask what the bond order is. This is the way that I would ask you to draw a diagram of a heteronuclear diatomic, one with two different nuclei. So let's go ahead and just draw this from scratch, mostly because I think it's good practice. We have it up on the board too, but I would rather do it from scratch. OK. So I say CO, and I say, draw the MO diagram; I had to tell you the ordering. So now we know that we can set up our diagram to look like what carbon normally looks like, where the order is pi, sigma, pi star, sigma star. OK? That ordering comes from what I told you on the slide, the fact that I say the ordering is pi, sigma, pi star, sigma star. Your pi level will always come as two orbitals. So now we go through and we label everything just like before. We don't forget to draw our axes or label our atomic orbitals. And we don't forget to put in all our subscripts: 2s down here, and 2p for all of these. So now we can fill in our electrons. How many valence electrons does carbon have? Four. So on the carbon side we put four. Now, how many valence electrons does oxygen have? Six. So on the oxygen side let's put six. All right. OK. So now, going back to what the question asked us: I said, draw the MO diagram and tell me the bond order. So this would be the MO diagram so far.
Now we need to put our electrons in the center. We don't really have to worry about where the electrons are coming from here; we just have to know how many we have. So how many electrons do we have to work with? Four from the carbon, six from the oxygen. So how many total? Ten. So we just start at the low energy and work our way up. We don't care whether they came from the carbon or whether they came from the oxygen; we just start down here. OK. So we have that. So now, what is the bond order? We have to count everything that's in our bonding orbitals, all our electrons, and everything that's in our antibonding orbitals. Bonding orbitals first: two, four, six, eight. How many are in antibonding? How do we know that something's in an antibonding orbital? It has a little star. So which is the only one with a star here that has electrons in it? This one. So how many electrons? Two. So what is our bond order? Three. So for CO, I could ask you what the bond order is, and ask you to use the MO diagram to explain it. Now, while we're here and we have this, let's talk about ions one more time, just to cement that. So, just because I don't necessarily want to redraw the whole thing, let's say we tried to make this negative. New problem: let's make this CO minus. So how would we do that? How could we make this CO minus? Add another electron, right? So now let's do that. We'll make this CO minus. You might want to actually end up redrawing this for the sake of your notes. Okay. So now we have to add an extra electron. The MO diagram part of it is kind of the easy part. Where, in this case, will we put the electron? The first spot we can. So we can't put it in here, so we'll just put it in here. Now we have to make a little decision about where we're going to put it as far as the atomic orbitals go. Are you going to want to put it on the carbon or the oxygen? Well, it's an electron, so we're going to want to give it to the one that wants electrons. What's another word for wanting electron density? Electronegativity. So would we put it on the most electronegative or the least electronegative? The most, which is oxygen. So we just go ahead and add it there. Now let's look at what that does to the bond order. Before we bother calculating it, which we'll do in a sec, let's think about what it would do. Did we add an electron to a bonding orbital or an antibonding orbital? Antibonding, right? It has a star. So we added it to an antibonding orbital. Does that mean it's going to add to the bond order or take away from the bond order? It's going to take away, right? It's antibonding; it works against bonding. So does that mean our bond order is going to go up or down? Down. Anyone want to guess by how much? Half, one, two, three? Half, right? An electron adds or subtracts half of a bond, because it takes two electrons to make a bond. So guess, before we fill it into the formula: what is the bond order going to end up being? 2.5. All right, let's see if we're right real fast using the formula. We are, but we'll check it. So now we have 2, 4, 6, 8 in our bonding orbitals, and then 3 in our antibonding. And so we get 2.5. Okay? So this is a reminder of how ions work, and an example of using an ion in a heteronuclear situation, where you have to decide where the electrons go, here or here. Okay.
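To make the electron counting concrete, here is a small sketch that fills a given MO ordering from the bottom up and computes the bond order. The orderings and degeneracies follow the lecture's diagrams (each pi level is two orbitals, so it holds four electrons); the function and level names are just illustrative, not anything from the slides.

```python
# Fill valence electrons into an MO ordering, aufbau-style, and compute
# bond order = (bonding electrons - antibonding electrons) / 2.
# Each level is (name, capacity); a '*' in the name marks antibonding.

def bond_order(levels, n_electrons):
    bonding = antibonding = 0
    for name, capacity in levels:
        e = min(capacity, n_electrons)   # fill this level before moving up
        n_electrons -= e
        if "*" in name:
            antibonding += e
        else:
            bonding += e
    return (bonding - antibonding) / 2

# CO ordering as given in lecture: sigma2s, sigma2s*, pi2p, sigma2p,
# then pi2p*, sigma2p*.
co_levels = [("s2s", 2), ("s2s*", 2), ("pi2p", 4),
             ("s2p", 2), ("pi2p*", 4), ("s2p*", 2)]

print(bond_order(co_levels, 10))   # CO:  4 + 6 valence electrons -> 3.0
print(bond_order(co_levels, 11))   # CO-: extra electron lands in pi2p* -> 2.5

# He2 with just sigma1s / sigma1s*: four electrons give bond order 0.
print(bond_order([("s1s", 2), ("s1s*", 2)], 4))   # -> 0.0
```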
And if we were to do the plus version, what would we have done instead of adding an electron? Take away an electron, right? I chose to do the minus here mostly because it was simpler to do on the page. Actually, there's a good question here, though: if we were to do the plus, where would we have taken the electron away from over here? Would we have taken it from the carbon or the oxygen? The carbon, right? We take it away from the least electronegative one. So keep that in mind, too. Okay. So that finishes that slide. And this just shows the same thing that we just did; again, make sure in your notes you note that we added an electron in this problem. Okay. And the next slide is sort of a checklist for you: things that, while grading exams, I see everybody forget to do when drawing MO diagrams. So keep this in mind when you're going through and drawing these. When you're done drawing one on the exam, I mean, who knows if you'll have one on the exam or not, but when you're done drawing one, make sure you check these. Did you label all the atomic orbitals? Those are the 1s, 2s, and 2p labels; lots of people forget to label those. Did you label the molecular orbitals? Did you add the star to the antibonding orbitals? That is not a point you want to lose, right? Because you forgot to add the stars? Make sure you add the stars to all the antibonding orbitals. Did you add or subtract the electrons appropriately if you have an ion? Don't forget, if I give you an ion, you need to make sure you go back and say, well, I have an extra electron, or I have one less electron, when you're drawing these. Did you use the proper order for the molecular orbitals? What I mean by that is that issue of oxygen, fluorine, and neon having one ordering, and all of the earlier ones having a different ordering, where that pi and sigma are flip-flopped. So just keep this little checklist in your head of things that you need to go back and check every time you do it. And along with this, don't forget your subscripts, okay? I want to know where they come from. Did they come from your 1s? Did they come from your 2s? Did they come from your 2p? Sometimes when you see these, you'll also see the p's with a different label: the x, y, and z. I don't really care about that. What are those referring to? It's talking about your axes, right? Whether you're on the x, y, or z. I don't think that matters to us quite as much, because it depends on how you set up your axes, and no one says you have to set your axes a certain way, so I'm not bothering with that too much. Okay. So we've done an example of this already, but now I want to actually talk about it a little bit more in depth. Don't get too bothered by this equation yet; I don't want you guys zoning out on me because of it. Okay. So, when we were adding and subtracting our orbitals for our homonuclear diatomics, things that have the same atoms bonded together, H2, F2, O2, all of those: are the p orbitals from one oxygen different in energy from the p orbitals of the other oxygen? No, right? Why would they be any different? They're the same atom; they have the same energy levels. Now, what do you think, just taking a guess: if we take carbon and we take oxygen, are those energy levels going to be the same? Well, let's think about how you would know this.
The only time that we've really sort of done energy level calculations was when we did the Rydberg equation. And we can't do the Rydberg equation for these because they have lots of electrons and that doesn't work. But with the Rydberg equation, did we get the same energy levels for something like hydrogen and helium? No, right? Because there was a Z in there. We had to figure that into the calculation. So even in just a one-electron system, are the energy levels of something like an oxygen, or let's stick with lithium, are a lithium and a hydrogen the same? No. So do we think that a carbon's and an oxygen's energy levels would be the same? No. This factors in when we're trying to figure out where these orbitals are. So we have the p orbitals, and they've just drawn them as one line, which I don't really love. It really kind of should be three lines. They've just drawn it as one for the sake of saving space, assuming that you know that there are three p orbitals, and same thing here. They add a little differently. And so what you end up getting is this huge mishmash of where all these orbitals fall. Okay? This is not something that I want you to replicate. It's just something that I want you to sort of understand. So if you have a polar covalent bond, the orbitals are going to be shared differently than if you have a nonpolar covalent bond. That's what I want you to get out of the slide. Okay? So for that section of the book, which is the main reason I bring this up, just know that for all of these sorts of situations where you have a polar covalent bond, they're going to be shared differently than a nonpolar covalent bond. That's all I really care that you get from this section of the book for now. If you go on to physical chemistry, you'll learn a lot more about this. But just for now, know that they don't add the same way. That's what this equation right here says: they don't add the same way. And we're going to sort of leave it at that for now. Any time that I ask you to do one of these heteronuclear diatomics, I'll tell you the ordering of the energies. I'll do what I did here, where I say that for the p orbitals, you're going to get this, you're going to get that. Okay? And so that's all you'll need to know. That's how Sapling does it as well. Hopefully you've seen, going through and doing your Sapling homework, that they do the same thing. I think you have to figure out NO and CO at some point in that homework. And they'll tell you the ordering. They'll say, here's sigma, here's pi, here's this, here's that. And you'll just fill in the electrons. It's the same way for the exam. Okay. Now, something I should mention here, since we aren't doing it: there's a whole section in the book where you have this sort of thing worked out for water and benzene and a few other ones. We're not going to worry about polyatomic molecules and MO theory. That really goes beyond the scope of this class. So don't worry about that section of the book. It's interesting, and if you're going to go on in chemistry, even into organic, you should skim through it and you should look at it. But it's not going to be tested. Things that you do need to be able to do for the exam: you have to be able to do the homonuclear diatomics from scratch. You know, I hand you a blank sheet of paper and say, draw me oxygen and tell me something about it, like finding the bond order of it. That you have to be able to do.
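To put a number on that Rydberg point, here's a quick sketch (my addition, not from the lecture). For one-electron systems the levels are E_n = −13.6 eV · Z²/n². Carbon and oxygen have many electrons, so this formula doesn't literally apply to them, but it shows why a bigger nuclear charge pulls the same orbital down in energy, which is why the carbon and oxygen atomic orbitals sit at different heights in the CO diagram.

```python
def one_electron_energy_eV(Z: int, n: int) -> float:
    """E_n = -13.6 eV * Z^2 / n^2 for a one-electron atom or ion."""
    return -13.6 * Z**2 / n**2

for name, Z in [("H", 1), ("He+", 2), ("Li2+", 3)]:
    print(f"{name:4s} n=1: {one_electron_energy_eV(Z, 1):8.1f} eV")
# H: -13.6 eV, He+: -54.4 eV, Li2+: -122.4 eV
# Same n, different Z, very different energies.
```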
For the heteronuclear diatomics, I'll draw you the energy levels or I'll just tell you the ordering, the way Sapling does it. And then you'll be able to draw it from the ordering and fill in all your electrons. Make sure you know how to do ions with both of these, right? Ions of both of these are fair game. So make sure you know how to do that. Okay. So, moving on to chapter five. Depending on which iteration of the notes you printed out, you may have some extra slides in there. Just cross those out. Okay. So chapter five. We're only going to cover 5.1 through 5.6 in this class. All the rest is going to be taken care of in 1B. I think it's the first thing you hit in 1B, too. But there's a section of this chapter that works really well with what we were talking about with dipoles and polar covalent molecules and things of that sort. And it really works better to sort of fit it in here before moving on. And so that's why we cover it here. So it's just the one section. So what we're going to get into here, and you kind of want to do a mind shift here from what we were talking about when we were talking about the dipoles and what makes a molecule polar, because we're really sort of doing a 180 in thought processes here. So we're back to Lewis structures and dipoles. Okay. So the main part of this chapter that we want to talk about here is: what is an intermolecular force? So let's break down the word, because with most things in science, you can sort of break down the word and get an idea for what's happening. Inter: that's between different things, right? The internet is between different things. What is the word for within one thing, within something that's the same? Intra. So that'll come up here or there, where we'll talk about intramolecular forces. So keep that in mind too. But inter is between different things. Molecular: relating to molecules, right? So an intermolecular force is going to be a force between two different molecules. So it's the idea that you have two different molecules, and those two different molecules are going to interact with each other. And they're going to interact with each other in different ways depending on what those molecules' own properties are. But all molecules are going to interact with each other through these same basic kinds of forces. Okay. So we have a bunch of different types of forces that we're going to be talking about here. So the first, or what we've sort of covered up until now mostly, is intramolecular forces. These are bonds that are within a molecule. So that's where this becomes intra. These are covalent bonds. We've already talked about these. There's not really a huge amount in this chapter that we're going to talk about with the intramolecular forces here. For intermolecular forces, now we have a bunch of new ones. And these are all going to be weaker than a covalent bond. A covalent bond is relatively stable, right? If you have this covalent bond formed, you have to really do something to break it apart. The reason it formed was because it was more stable. Now, intermolecular forces, these are much, much weaker. These are going to be a little bit more transient. You'll have them, but they're going to be switching between different molecules, let's say. So these are your four different ones. As is normal, I sort of have them all listed out for you here, and we're going to go into each one in more depth in a minute. But we're going to have dipole-dipole forces.
We're going to have hydrogen bonds, which are actually just a really particular type of dipole-dipole force. And then we have ion-dipole and dispersion forces. So what you want to get out of all of these different slides is: what sort of molecules will form these particular ones, and what kind of strengths you're looking at, because not all of these are going to be the same strength. And then we'll also get into the idea of what that is going to do to the sort of bulk properties, for instance boiling point and melting point. What's going to change about things of that sort based on these forces? OK, first one we want to talk about: dipole-dipole forces. So these are attractive forces between polar molecules. So if you have a polar molecule, this is the sort of force you can get. So if I were to ask you, does this molecule have dipole-dipole forces, the first thing you're going to ask yourself is what? Is it polar? And if the answer is yes, well, then it has dipole-dipole forces. If the answer is no, then it doesn't have dipole-dipole forces. Now, the way that these work is that you can think of all of these molecules as being little tiny magnets. If it has a dipole, that means that one side has a partial positive charge and one side has a partial negative charge. So what is the positive side of one molecule going to do to the molecules around it? Attract the negative side or the positive side? Yeah, if it's positive, it's going to pull the negative side toward it and it's going to push the positive side away from it. And so what happens is they're sort of attracted to each other because of this. Now, these are going to be the strongest except for hydrogen bonding. Hydrogen bonding is actually where you have hydrogen bonded to particular atoms, and you have a ridiculously strong dipole right there in that little general area. And so it's sort of an extension of the dipole-dipole force idea. So this is going to have increasing strength as the dipole of the molecule increases. What does that mean? That means if we have a molecule that's really, really polar and we have a molecule that's not very polar, the one that's really, really polar is going to have stronger dipole-dipole forces. Okay? All right, I'm actually seeing some confused looks about that. So let me do something there. What I mean by that is, let's say we have something like, let's draw this out as a line structure, since you haven't had a lot of practice with line structures. Say we have that. And we'll do a different one. And let's say we have that. Okay, so what I meant by comparing the dipole-dipole forces and their strength is that first you would have to decide which one has the bigger dipole. First of all, do we think that each of these is going to have a dipole? Are they polar? So we have a carbon with three H's, a carbon, and another carbon with three H's. Don't forget your rules for line structures. So let's draw this out a little bit expanded, in case you've forgotten how your line structures work. We'd have that. And then over here, sorry, that should just be an H, we would have that, right? So that's sort of expanded out. Don't forget how to do that. That was the thing that I kind of sent home as a homework lecture in a video. So you can review that as much as you like. So what do we think? Does that have any polar bonds in it, first of all? That's our first question. The carbon to the oxygen, that's polar, right? Does this one have any polar bonds? Carbon to the bromine.
So we get a dipole pointing toward the more electronegative atom, right? We know that they're polar bonds because we're looking at the difference in electronegativity. We're saying this one has a high electronegativity, that one has a low electronegativity; high electronegativity, low electronegativity. So now, to decide which one has stronger dipole-dipole forces, we'd have to decide which dipole is stronger. So which one's more electronegative, oxygen or bromine? Oxygen. So which molecule would have the greater dipole? The one with the oxygen. So which one has stronger dipole-dipole forces? The one with the oxygen. Okay, so that's what I mean here: increasing strength as the dipole of the molecule increases. Let's do one more example. And even simpler. Well, actually, we can't do that one yet. Never mind. Okay, so this sort of sums up dipole-dipole forces. So if I ask you if something has a dipole-dipole force, you say: is it polar? Or, more likely, I'll give you a molecule and I'll say, what sorts of intermolecular forces does it have? And you'll say, okay, well, does it have a dipole? Yes, so it has dipole-dipole forces. Okay, so the next one that we're going to talk about is what happens if we take this to the extreme: hydrogen bonds. So what do we know about oxygen, nitrogen, and fluorine? What is special about those three atoms that has something to do with this? They have very high electronegativities, right? They're the three highest on the periodic table. Now, something happens if you take a hydrogen and you bond it to an oxygen, nitrogen, or fluorine. And what happens is you get something called hydrogen bonding. What's really important here is to notice what is and isn't the hydrogen bond. This is an intermolecular force, right? Inter. So does that mean within one molecule or between different molecules? Different molecules. One molecule of water and another molecule of water. One molecule of ammonia and another molecule of ammonia, or one of each, either way. So what happens here is you have a hydrogen that is covalently bonded. So that means you have an actual bond, something like this bond in water. So this hydrogen and this oxygen are covalently bonded to each other. Now, if you take hydrogen and you bond it to an oxygen, nitrogen, or fluorine, in other words, your three most electronegative elements, you'll have a molecule that is capable of hydrogen bonding. But don't forget, the hydrogen bonding occurs between two different molecules. Now, right here, let's look at this a second before we move on. If you have two water molecules near each other, the way this ends up working is that your oxygen atom is going to have a little bit of a negative charge, right? Because oxygen is very highly electronegative. It's stealing the electron density away from this hydrogen that's right here. So you get a little bit of a negative charge here and a little bit of a positive charge here. So what will happen is, just like with the dipole-dipole forces, you'll have this attracting to this. You have a negative attracting to the positive. And then we could draw a whole bunch of these and say, well, you have a positive here, attract that to another molecule's negative, another molecule's positive, and just kind of zigzag all the way around. But if I draw you this and I say, point to the hydrogen bond, and you point to this, are you going to get your points? No. That is a covalent bond.
That is not a hydrogen bond. This is the hydrogen bond. And it's not really a real bond, right? It's these intermolecular forces. So this hydrogen bond is between one molecule's delta positive side and another molecule's delta negative side. So make sure you remember that: two different molecules. Now, something like ammonia can do it too, where you have an N-H here, an N-H here, and an N-H here. So you have these three bonds. Your hydrogens all take on a little bit of a positive charge. Your nitrogen takes on a little bit of a negative charge. And the negative side of the nitrogen would be attracted to, if we were just looking at ammonia, the positive sides of other ammonia molecules. Now, I put this picture in so that you can see that you don't actually only have hydrogen bonding between exactly the same kinds of molecules. If you have a mixture of ammonia and water in some situation, you could actually get them hydrogen bonding to each other. That's fine. Now, the more hydrogen bond donors and acceptors, the bigger the difference in properties. So this means something like water, where you have two, is going to have a very high amount of hydrogen bonding, where maybe something that doesn't have as many hydrogen bond donors or acceptors would have less. And you have some really big examples of that when you get to your worksheet, where you can go through and you can circle all the places where it's possible to hydrogen bond in your discussion. Okay. The more hydrogen bond donors and acceptors you have, the bigger the difference in properties. So this is what I was talking about in that last little section. I just sort of want to talk about it a bit more. So here we have some ball-and-stick models, where all of these gray areas are your carbons. So carbon, carbon, carbon, carbon, carbon. And then the reds are your oxygens, and the whites are your hydrogens. Just a different way of representing molecules that I thought I'd put up for you to see. So what we're going to do here at the end is we're going to say, well, all of these different forces affect your boiling point. The more forces you have, the more your boiling point's affected. So something like this molecule, how many hydrogen bond donors and acceptors do you have? You have this section where you have a hydrogen and a hydrogen and a hydrogen that's bonded to an oxygen. So three. And then you have an acceptor here, an acceptor here, and an acceptor here. What do I mean by acceptor and donor? This is a hydrogen that's being sort of donated to this bond, so it's the donor, and this is the acceptor. So here we have three. Notice you have a really, really high boiling point, 290 degrees Celsius. Here, how many do we have? Two, right? One here and one here. And we have a little bit lower of a boiling point. Sure, it's still high, but it's lower than this one. Now we look at water, where we just have the two donors, and there's the two lone pairs, so arguably two acceptors, but you have less. It's 100. So we go from one of these situations to two to three, and we go from 100 to 188 to 290. So we haven't exactly talked about this boiling point phenomenon in detail yet, but this is how we're going to sort of decide which ones have more intermolecular forces: which ones have the higher boiling point. So we'll do that next time.
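As a rough sketch of the trend on that slide (my addition; the molecule labels are placeholders, since the slide itself isn't reproduced here, but the boiling points are the ones quoted in lecture):

```python
# (label, hydrogen-bond donor count, boiling point in Celsius), as quoted.
# More donors/acceptors generally means a higher boiling point, though
# molecule size matters too, which is why small water sits below the
# larger two-donor molecule.
slide_examples = [
    ("three-donor molecule", 3, 290),
    ("two-donor molecule", 2, 188),
    ("water", 2, 100),
]

for label, donors, bp in sorted(slide_examples, key=lambda e: e[2]):
    print(f"{label:22s} donors={donors}  bp={bp} C")
```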
So for the last few types, we have two more to get through, which we're obviously not going to finish up today, but let's talk about ion-dipole forces now. This one's very similar to dipole-dipole, except now you actually have an ion in there instead of just two molecules with dipoles. So if we have an ion, do we think that's going to interact with a polar molecule? Well, sure. So think magnets again, right? If we have a positively charged ion, what is that going to do to the negative side of another molecule? It's going to attract it. If we have a negative ion, it's going to repel the negative side. So whenever you have a situation like this, a positive ion attracts the negative side, and a negative ion attracts the positive side. Sorry, that was not supposed to click off. So this is very similar to a dipole-dipole force. Now, kind of take note here: this is normally going to occur in what sort of situation? One compound or two compounds? Two compounds, right? A good example of this, where you also have some hydrogen bonding going on, is something like salt and water. This is why salt dissolves so easily in water: you have positive ions and you have negative ions, and with those positively charged and negatively charged ions in the sodium chloride, the water is going to come in and surround them, and it's going to make it really easy to actually dissolve the salt. So this is sort of one of the reasons why this "like dissolves like" idea exists.
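Since the next lecture picks up with dispersion forces, here is the decision checklist so far, written out as a sketch (my addition). The inputs are simplifications: deciding "is it polar" in the first place takes a Lewis structure and a geometry, as we keep seeing.

```python
def intermolecular_forces(is_polar: bool, has_H_on_N_O_F: bool,
                          ion_present: bool) -> list[str]:
    """Rough classifier for the force types from this lecture."""
    forces = ["dispersion"]  # present for everything; covered next time
    if ion_present:
        forces.append("ion-dipole")          # an ion among polar molecules
    if is_polar:
        forces.append("dipole-dipole")
    if has_H_on_N_O_F:
        forces.append("hydrogen bonding")    # the extra-strong dipole-dipole case
    return forces

print(intermolecular_forces(True, True, False))   # a water-like molecule
print(intermolecular_forces(False, False, True))  # salt dissolved in water
```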
UCI Chem 1A is the first quarter of General Chemistry and covers the following topics: Atomic structure; general properties of the elements; covalent, ionic, and metallic bonding; intermolecular forces; mass relationships. Index of Topics: 0:00:18 Practice 0:10:54 MO Theory 0:12:56 Working Through Carbon Monoxide 0:20:37 When Drawing MO Diagrams 0:22:31 Heteronuclear Diatomics 0:27:17 What is an Intermolecular Force? 0:29:27 Types of Forces 0:31:14 Dipole-Dipole Forces 0:37:10 Hydrogen Bonds 0:43:52 Ion Dipole Forces
10.5446/18978 (DOI)
So, we're sort of at this point in the slides. So we've done some hybridization. Now we need to actually take what we learned last time and apply it to a bunch of examples. Okay? Now, we're starting a little bit from scratch here. We'll go back to your other Lewis structures that we drew here in a little bit, too. But I thought it was kind of a good idea at this point to make some new structures and not just go back to our Lewis structures right away. This reminds you, you know, how to draw Lewis structures and how to think through everything, so we can kind of go through everything at once. Okay. So, if I want to know the hybridization of the carbon in CH4, can you just look at this and tell? Maybe. Okay, I've heard yeses and noes. Well, if you're one of the yeses, great job. You probably can, and that's fine. But if you're one of the noes, which is probably more likely, how do we go about doing this first? Lewis structure. Good. Our Lewis structure is always going to be our first step. So you put carbon in the middle. One, it's listed first, which is normally a good clue, not always, but a lot of the time. Also, hydrogen can't form multiple bonds, so carbon's got to go in the center. We'll count up our electrons just to make sure that everything works out okay, which at this point you may kind of start skipping over with the simpler ones. Just make sure you do it for the more complicated ones. We draw our skeletal structure. We make sure that everything has an octet, or in the case of hydrogen, a duet. We're set. We have all of our electrons accounted for, so we're good. So that's our Lewis structure. All right. So now we need to figure out what kind of hybridization is present. So what is the first step to figuring that out? Finding our steric number, because that tells us how many hybrid orbitals we need. So what is our steric number? Four. So how many hybrid orbitals do we need? Four. So if we need four hybrid orbitals, how many orbitals are we going to have to mix together to get those? Four. So we start with our low energy, which is an s. We count up, so we need some p's. How many p's do we need to get four? Three. So that's an sp3 hybridized carbon. Now, when you look at your slides, the other part of that question was: what does each atom use to bond with? So we haven't completely talked about that explicitly, but we've taken these orbitals and we've made these new hybrid orbitals in order to form bonds and to put lone pairs in. So let's look at the hydrogen first, because I think hydrogen is a little bit easier here. What is hydrogen using to bond with? It's using its 1s orbital. It's the only orbital it has with any electrons in it. If you were to look at the electron energy diagram for it, it just looks like that, right? We just have the 1s orbital to work with. Now the more complicated one: what is carbon using to bond with? Do we have any p orbitals anymore? How did we make our sp3 orbitals? We took our s orbital, we took our p orbitals, we mixed them together, and we made sp3 orbitals. So do we have any s orbitals left on the carbon? No. Do we have any p orbitals left on the carbon? No. What orbitals do we have? sp3 orbitals. So what orbitals is carbon using to bond with? sp3 orbitals. Good. So carbon bonds with sp3 orbitals, and then hydrogen bonds with its s orbital. Okay. Next one then. Actually, just for fun, since it's a good review, we should do geometries too, right? Why not?
What's the geometry on that? Tetrahedral, right? Four things are bonded to it, and so it's going to be a tetrahedral geometry. Are the electron and molecular geometries the same? Yes. Okay. NH3. So first step, draw a Lewis structure. So we know that nitrogen has five valence electrons. Each of the hydrogens has one. So we have eight. So we draw this out. We put the hydrogens on. Are we done? No, right? What does it have? It has a lone pair. Okay. So we have that done. All right. So now, what is its steric number? Or, what is its coordination number? Maybe that helps too. So coordination number is your number of bonds. So the coordination number is three. And then your steric number is your number of bonds, and what else? Lone pairs. Or I should say, the number of things that it is bonded to, plus lone pairs. So our steric number is four. Okay. We'll go in the order we learned it. So let's do geometries now too. So what is our electron geometry? Tetrahedral, good. Because that counts in the lone pair. And what's our molecular geometry? Wow, not too much agreement here. We'll build it. I'm going to be a little bit lazy, and I'm just going to make my hand be one of the atoms, because it's a little bit faster. So if all of these were atoms, yeah, we'd have that. And now we take this and we put a lone pair there. So our base is a triangle, and it looks like a pyramid, so it's trigonal pyramidal. Okay. Next part then. Hybridization. So we know its steric number is four. So how many hybrid orbitals do we need? Four. So how many orbitals do we need to start with? Four. So it's going to be sp3. Good. One s and three p's, so that's four to start with. We mix them all together and we make sp3 orbitals. Now, do we have any more s orbitals on that nitrogen? Do we have any more p orbitals on that nitrogen? No. What kind of orbitals does that nitrogen have? sp3 orbitals. Okay. So what does hydrogen use to bond with? Its 1s. Yep. What does nitrogen use to bond with? sp3. Where's the lone pair? What does nitrogen have? What are the only orbitals nitrogen has? sp3, right? Does it have any s's? No. Does it have any p's? No. It only has sp3's. So it bonds with an sp3 here and an sp3 here and an sp3 here, and then it has an sp3 with a lone pair in it. So, to be a little repetitive here, same as the last one: hydrogen bonds with its s orbital, and the lone pair is in an sp3 orbital. Yeah? Say that again? Oh, you mean for this part, or the hybridization? The sp3 part? Yeah, the lone pair. So yeah, it's because we've taken all the s orbitals and all the p orbitals and we've mixed them together and made this whole new sort of orbital. So that's all we have left. And we know to do that because when we set this up, we want the lone pair and the three bonds to all have a place to go that's as far away from the others as possible. OK. Next one. PCl5. So we draw this one out. And we put in all our lone pairs. And if we count up all our electrons, we'll see that we're all set. So now we get to go on and figure out our geometries and our steric number and our coordination number. So first of all, steric number and coordination number: they're the same this time. So what are they going to be? Five, right? One, two, three, four, five. So what does that make our geometry? Trigonal bipyramidal. Good. And that's for both. OK. So now hybridization. So our steric number is five. So how many hybrid orbitals do we need? Five, right? The idea here being to space them all out as much as possible. And so we need to start with how many orbitals? Five.
So we have sp3. That gets us four. So how many d orbitals are we going to need? One. So we have sp3d. OK. Next part then. What does everything bond with? So let's start with the chlorine. What does the electron diagram of chlorine look like here? Let me draw this out. So what is chlorine going to use to bond with? An s orbital, a p orbital, something else? It's going to use a p orbital, because that's the only open spot it has, right? OK. Now what about the phosphorus? Just because the symbol for phosphorus is P, I'm going to actually write it out. So what is the phosphorus going to use? So first of all, does it have any s orbitals, or I should say, valence s orbitals? No. What about p orbitals? No, we took all those away too. So what does it have with some electrons in it that we can use to bond with? OK, d, but we're not going to use d on its own, right? What are we going to use? sp3d. We're going to use the hybrid orbitals. So that's what we use to bond with. We use the sp3d orbitals to bond with. All right. So that means that these bonds are made up of chlorine's p orbital overlapping with phosphorus's sp3d orbital. Is it always the case that your atoms on the outside are going to be using a single orbital and the one in the middle is going to be using the hybrid? With setups like this, yeah, you can usually say that, because the outside ones are only going to be bonded to one atom. Now, that doesn't work if you move on to our very next example, because that one doesn't really have just one central atom, right? So if you're talking about the ones like we've been doing up until now, where you just have a central atom and then the ones on the outside, yeah, the ones that are on the outside you can say are unhybridized. You could, in fact, try to make an argument that they're hybridized, but there's no real reason to do that. OK. So, next one: CH3CHO. So this is sort of testing whether you can draw your organic structures now, right? So you can't forget how to do those. Luckily, they tend to give you hints on how to draw them by how they're written. So we have a C with three H's on it. And then that's going to be bonded to this C, which is bonded to an H and an O. Now, how did I know not to make that an OH group? Try it. See if you can do it. If you try to do it, there's not going to be a way for you to bond the C to everything and get four bonds to everything and have everything work out perfectly. So go ahead and give it a shot if you want, and you can see why. OK, so we have this. Now, let's talk about each carbon individually, because we need to. So let's just call this carbon A and that carbon B, so we have something to refer to them by. So what about the carbon that we have labeled A? What is the steric number? Four. OK, so from that, what's the geometry? Tetrahedral, right? Four bonds. OK, now hybridization. So we have four bonds. We need how many hybrid orbitals? Four. So we're going to need to take an s and three p's and combine them together. So that'll be sp3. OK? What is that carbon using to bond with? sp3. So if I point to this bond right here and I say, what overlapping orbitals make that bond, you would say: from hydrogen, it is the 1s, and from carbon, it is an sp3. OK, now let's do the next carbon, which I've labeled B. OK, what's that steric number? Three, right? There's four bonds, but there's only three atoms that are attached. So that's our steric number.
So that means that our geometry is what? A lot of disagreement again. We'll build it. So, three things that we need to get spread out as much as possible. So you kind of rotate them around in your mind and see what happens. Yeah, and eventually you settle on this, hopefully. So you settle on that. It's a triangle, and it's all in one plane, so it's trigonal planar. OK, now hybridization. The steric number's three. So how many hybrid orbitals do we need? Three. So what are we going to use? sp2, good. One s and two p's to add up to three. Now, for all of these so far, I've just been saying, what does it bond with, and you've been giving me an answer. Can I really say that with this one? I have to be a little bit more explicit, right? So let's say: what does it form its sigma bonds with? So remember, sigmas are any of your single bonds, plus one of the bonds of the double bond, which I kind of think of as the first one. So what does it form its sigma bonds with? sp2, good. Those are its hybrid orbitals. You made those hybrid orbitals to set up that skeletal structure. So all of your sigma bonds are going to be made with your hybrid orbitals. Now, what other kind of bond does that carbon have? It has a pi bond. Now, what is your pi bond formed with? Well, what do we have left? You guys are right, but you're being a little too smart today, so I have to backtrack a little bit. So what do we have left over? We have a p, right? Because we had one s and we had three p's. So we took away the s, and we took away two of the p's. So we just have one p left over. So it's not that we pulled it out of thin air or anything; that's just what we had to begin with and what we have left over. So the pi bond is formed with your p orbitals. Now, is that p orbital going to be overlapping with the oxygen end-on-end for the pi bond, or side-on-side for the pi bond? Side on side, right? Sigma bonds overlap end on end. Pi bonds overlap side on side. OK. So that's for the two carbons. All right, so let's see. If I point to this bond right there and I say, what two overlapping orbitals make that bond, you'd say: from this carbon it is an sp3, and from this carbon it is an sp2. Good. Now let's see what else I can ask you on that one. I could say, how many sigma bonds do we have? One, two, three, four, five, and then do any of those count? One of them does, right? So six. So we'd have one, two, three, four, five, six sigma bonds. And then how many pi bonds? One. Now, what if I asked you to, let's see, we could draw the line structure of this. Don't forget line structures. So far we haven't really had any that work well for it, but now we do. So to draw a line structure of this, we'd put our pen down, and that's our first C. And then we go to here, and that's our second C. And now we have to put in whatever is on the carbons. So we have an oxygen right here, double bonded. So we would draw in a double-bonded oxygen. Do I have to draw those hydrogens in? No. So the other way I could ask this whole set of things is to just draw you that. It would be drawn a little bit nicer. Let's draw it a little bit nicer. OK, drawn like that. And then I'd ask you all the same questions. So don't forget about the line structures.
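That sigma/pi counting is mechanical enough to write down as a sketch (my addition, not from the lecture). Each bond contributes one sigma; every extra line of a double or triple bond is a pi. The bond list below encodes the CH3CHO structure we just drew.

```python
# (atom1, atom2, multiplicity): 1 = single, 2 = double, 3 = triple
ch3cho_bonds = [
    ("C_A", "H", 1), ("C_A", "H", 1), ("C_A", "H", 1),  # the CH3 group
    ("C_A", "C_B", 1),
    ("C_B", "H", 1),
    ("C_B", "O", 2),                                     # the C=O double bond
]

sigma = len(ch3cho_bonds)                             # one sigma per bond
pi = sum(mult - 1 for _, _, mult in ch3cho_bonds)     # the extras are pi bonds
print(sigma, pi)  # 6 sigma, 1 pi -- matching the count above
```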
I talked about them in class and then I sent the video with the examples. So don't forget about those. OK. Now, keep this molecule sort of in mind, maybe even still up, because I have better pictures of it. In a perfect world, I would just draw them all for you, but art isn't really my thing. So I made you some pictures on the computer instead. So this is the molecule that we just looked at, drawn out. Now I've drawn out the orbitals for you. So, same molecule, just at slightly different angles. I changed the angle so that you could see. And these pictures are online, so don't try to draw them unless you really like drawing. They're all online. So if you have something like this, what would this carbon right here be? What was it labeled in our picture? A, right? That's carbon A. That's the one that's sp3 hybridized, right? You have these four hybrid orbitals coming off. So then what are all the yellow ones? Your hydrogen s orbitals, right? So it forms a sort of overlapping structure right here. So then you have this bond, which was your bond between your two carbons. So you have your overlapping sp3 and your overlapping sp2. Now, this then would be your second carbon. For whatever reason, I decided to draw oxygen hybridized, which I didn't need to do. I'm not sure why I did that. You can do it either way. So this overlapping p orbital, what would that form? That forms your pi bond. So I haven't drawn it in quite yet. But I'll flip to, oh, this is a different angle. So now you have the pi bonds. So you can see those. Is this pink one one pi bond or two pi bonds? One pi bond, right? It's just like a p orbital. It's one p orbital, but it has two lobes. This is one pi bond, but it has two lobes. So that gives you sort of a picture of what we were doing with the Lewis structures. And it's what you kind of want to think about in your head as you're drawing those structures out. This is not something that I would ask you to draw. But I will ask you enough questions on it that I know that you have it in your head. So, one more view for you? Maybe not? Apparently not. OK. So, just a list of things to remember as you're going through your homework and you're doing these problems and you're getting ready to study. However many unhybridized orbitals you start with is how many hybrid orbitals you end up with. So if you tell me something is sp2 hybridized, it's going to have how many hybrid orbitals? Three. And we get to work backwards with that too, right? We need four orbitals; we know we need to do sp3 because we need four orbitals. Now, hybrid orbitals are made from the atomic orbitals, right? An atom takes its own atomic orbitals and forms these hybrid orbitals from them. The bonds are formed by overlapping orbitals, but each atom kind of keeps its own orbitals, right? It just overlaps them, and that forms the bond. We're not combining an s orbital from this atom and an s orbital from that atom into one orbital or anything like that. This is all, for hybridization, within one atom. Your bonds are formed when these hybrid orbitals overlap. Each atom gets its own hybridization type, right? When we looked at that last one, it didn't matter that one carbon was sp3 and one carbon was sp2. You go to each atom individually and you say, this one is this hybridization, this one is that hybridization. And they don't rely on each other. Your sigma bonds are formed from your hybrid orbitals. Or, if you have unhybridized atoms, they can be formed from your s's or your p's.
But it's either s or p on your unhybridized atoms, or hybrid orbitals on your hybridized atoms. Your pi bonds are going to be made from your leftover p orbitals. And they're made from that side-on-side overlapping. So this is just sort of a summary list of things you want to keep in your mind as you're doing all of these. To help you with the visualization again, because some of this is really hard to picture, I have some videos that are actually going to work for me today. So this is ethane. So it's two CH3 groups, so that you can see it all. And we'll watch it one more time, since it went by a little quick. So let's start over. So you have the two CH3 groups coming together. And notice they have the second little lobe there, which I hadn't drawn. It's very difficult to draw in, so a lot of times they don't necessarily draw them in. But here you can see them all. And then you have the hydrogens coming in and bonding. So you can see where you get the geometry from. And you can see this. What is this bond called, sigma or pi? That's a sigma, right? It's overlapping end on end. And then each of the other bonds, between the carbon and the hydrogens, are also sigma bonds. So then you can see it drawn out like that. OK, let's do ethene then. So ethene, if you remember from your fundamentals section, has a double bond. So it's CH2=CH2. So that bond right there is what bond, sigma or pi? Sigma. And then that yellow bond is your pi. So that's how you get your double bond. And then you have your hydrogens coming in. So you can see how the geometries all work there in all different directions. Now, those two videos aren't online, just because I don't know the legalities. They come from your Atkins book. So I don't have those posted online. But if you want to see them again, just stop by my office hours and I can replay them for you as much as you need. OK, we're going to go back through some of your Lewis structures, and now we're going to do it a little bit quicker than the ones we just did. We're just going to go through, since we have everything else done up, and we're just going to do hybridization. OK, we're skipping nitrogen because it's not hybridized, so that one's not so interesting. So this one. So what's our steric number? Three. So how many orbitals do we need? We need three hybrid orbitals. So it's going to be what hybridization? sp2. Now, what does it form its sigma bonds with? What does that carbon form sigma bonds with? The sp2 orbitals, right? So if I point to this bond, what is carbon forming the bond with? sp2. What is hydrogen forming the bond with? Its s, good. And what is the pi bond being formed with? OK, with oxygen, but what orbital? The p. So for carbon, the sigma bonds are formed with the sp2 orbitals, and the pi bond is formed with the p orbital. Now this one. What is the hybridization on boron? What's the steric number? Four. So it needs how many hybrid orbitals? Four. So what kind of hybridization? sp3. Good. And then what about the nitrogen? Same thing, right? One, two, three, four: steric number four, four hybrid orbitals. So it needs to be sp3 hybridized. All right? So for both boron and nitrogen, what are all of their bonds formed with? sp3 orbitals. And what orbitals do the hydrogens form their bonds with? Their s orbitals, good. All right.
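Before the next batch of examples, here is the steric-number rule we keep using, written out as a sketch (my addition): count the bonded atoms plus the lone pairs, and that count fixes the hybridization.

```python
HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3", 5: "sp3d", 6: "sp3d2"}

def hybridization(bonded_atoms: int, lone_pairs: int) -> str:
    """Steric number = bonded atoms + lone pairs, which picks the hybridization."""
    return HYBRIDIZATION[bonded_atoms + lone_pairs]

print(hybridization(4, 0))  # CH4 carbon: sp3
print(hybridization(3, 1))  # NH3 nitrogen: sp3
print(hybridization(5, 0))  # PCl5 phosphorus: sp3d
print(hybridization(3, 0))  # the C=O carbon in CH3CHO: sp2
```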
XeF4 4+, and remember, this is the 4+ one. You have a couple that look very similar to this, so make sure you realize this is the 4+ one. OK. So what is our steric number? Four. So what is the hybridization? sp3. Good. Going back to this one. So we have N, N, and O. So we're really just talking about the one nitrogen here, the central one. What is the hybridization of that nitrogen? What's the steric number? Two. So how many orbitals do you need? Two. So it's sp. So it's sp hybridized. What are the sigma bonds formed with? OK, let me say that a different way: what orbitals are the sigma bonds formed with? From the oxygen, it's going to be an sp3; from the nitrogen, it's going to be an sp. Now, what does the nitrogen use to form the pi bonds here? Yeah, the pi bonds are formed with p orbitals. How many sigma bonds do you have in this molecule? Two. How many pi bonds? Two. So I'm trying to hit every different way I tend to test on this. These sorts of questions I'm going to throw out to you; they make really quick test questions. All right, we have this one. So the steric number equals six. So, hybridization: we have an s, three p's, and then how many d's do we need to get six? Two. sp3d2. And that's as high up in the hybridization as you go. All right, the other one of these: XeF4, this time without the 4+. So what is our steric number? Six, right? Don't forget about your lone pairs. One, two, three, four, five, six. So that means that we have to have what sort of hybridization? sp3d2. All right, it's just counting orbitals. This one, BH3. So you have one, two, three orbitals that you need. So it's going to be sp2. One, two, three. So once you get the Lewis structure drawn, the rest of the questions are relatively quick once you get the hang of it. You want to practice by picking a random structure, drawing it, and figuring out the steric number, coordination number, electron geometry, molecular geometry, bond angles, and hybridization of any molecule that you can find to draw. That's the best way to practice this whole section. All right, this one. So we have a couple of different things to look at here. First, we're looking at the sulfur. So what's the hybridization on the sulfur? Well, actually, let's not talk about the sulfur. Let's just look at the oxygens for this one. So what is the hybridization on those oxygens? sp what? sp3, right? Don't forget about the lone pairs; I heard a few sp's in there. The sp would count for these two orbitals, the one going this direction and the one going this way, but you also need to account for the lone pairs. So it's sp3. Now we have this one: ClF4 minus. So we have a steric number of six. So how many hybrid orbitals do we need? Six. So it'll be sp3d2. Now, the next one we're going to just skip, because it gets into having the p's and the d's there. So just don't worry about that one; you can write "don't worry about it" in there. That's why I kind of skipped it. OK. Yeah? Does the triple bond make a difference earlier? Yes, we asked how many pi bonds there were. Was it one or two for the triple bond? So you mean for the HCN, maybe? Let me look. Well, OK, so what was your question anyway? I'll answer without having it up. Does it have one pi bond? So if it's a triple bond, you'll have the one sigma bond there, and then there'll be a pi bond for each of the other ones. So it'll be two pi bonds.
So assuming there wasn't anything else going on in the molecule that had other pi bonds, and it was just the one triple bond, then it would be two pi bonds. Yeah? If we have something with resonance structures, do we have to worry about that? Absolutely. Like that one? Oh, you mean like with the N and O one? Yeah. So if there's one resonance structure that's more stable than the others, you have to stick with the most stable one. If you have something like this, where they're all the same stability, it doesn't actually make a difference in the calculation. So we can do this one just as is. So if we have this one, what is the hybridization on the nitrogen here? You do it the exact same way. Just because you have resonance doesn't actually change anything. So how many things do we have? Three. So what do we need? sp2. And then the p orbital will be used to form the sort of third of a bond. Yes? So what he asked is, can we go through the part where we say what everything uses to bond with? Yeah, good. This is a good one to go through. So let's start with the fluorine. What does fluorine use to bond? Well, what does fluorine have? Let's just draw the valence shell. So if you draw this out, what does fluorine have to bond with? Yeah, it has this p orbital here. So fluorine uses a p. Now chlorine. What would chlorine use to bond with? Yeah, the hybrid orbitals. So chlorine uses sp3d2. Now, one other question I can ask here, then, that I haven't actually asked on a few of the other ones: where are the lone pairs at? So where are they? What orbitals? Good, sp3d2. The lone pairs are also going to be in those hybrid orbitals. Yeah? Say that one more time? Oh, for the orbitals in the bonds? Yes, as long as you're talking about the sigma bonds. Anything else? OK. All right. So now we get to move on to MO theory. So this one sort of drastically differs from hybridization theory. They're both very useful, just for different things in general. So in valence bond theory and hybridization, we kept overlapping atomic orbitals. We had orbitals from one atom overlapping with orbitals from another atom. And when we made those hybrid orbitals, we took all the orbitals from one atom and used those to make new atomic orbitals. Now what we're doing is taking atomic orbitals and mixing them together to make molecular orbitals instead. So those orbitals are going to belong to the whole molecule. OK. So this is the quantum mechanical treatment of bonding. This goes into the quantum mechanics rather than just a general explanation. So your orbitals belong to your molecules, not your atoms, which is a big difference from hybridization. Now, a linear combination of atomic orbitals: what does that mean? What that means is that you're adding or subtracting atomic orbitals together. Now, remember back before the first midterm: how did we get those orbitals? How did we see what an s orbital and a p orbital look like? What did that come from? What mathematical thing did that come from? It came from a wave function, right? It was your probability densities. So this is a linear combination of those atomic orbitals.
The way they get that is by adding and subtracting those atomic orbital wave functions together. Now, this treatment yields better agreement with experiment, meaning that there are a few places where, if you try to use hybridization and you look at what actually happens in real life, they don't really match up very well. MO theory tends to do a better job of it. MO theory is considered predictive rather than just explanatory. It actually predicts things as opposed to just explaining things. So why don't we use it for everything? Well, I've alluded to this here and there: it's really complicated. It's much more complicated than the other ones. And honestly, hybridization works really well in most cases. There are just a few cases where it doesn't, and then, of course, some much deeper places where it doesn't as well. But in general, that's kind of the gist of things. OK. So I'll go back a second while everyone's still writing. A little note, in case you get ahead of me in your book: when they start talking about more than the diatomics, we're going to take MO theory up through diatomics, and that's it. You'll be responsible for knowing all the homonuclear diatomics in the second row, meaning F2, Ne2, those. And I'll give you hints and such as we go along for exactly what you will and won't be responsible for knowing. But just keep that in mind when you get to the very end of the chapter. I'm not going to hit on water and benzene and things like that, so don't worry about covering that. OK. Looks like a few are done writing. All right. So in hybridization, when we took three orbitals and we combined them together and made new orbitals, how many did we get back? The same number. The same rule applies for MO theory. Now, the only difference here is that rather than combining atomic orbitals and getting atomic orbitals, we're combining atomic orbitals from different atoms, mixing them together, and getting molecular orbitals. But if you combine two atomic orbitals, you get two molecular orbitals. So it's still the same number that you get back. In one case, you're going to have the addition of the two orbitals, and in the other, you're going to have the subtraction. Now, if you have the addition of the two orbitals, we call this the bonding orbital. Any electron that you put in these orbitals, and I'll show you a picture of it before too long, maybe next time, any electron you put in these orbitals is going to add to the bond. It's going to make the bond stronger. The other one is going to be the subtraction of these two orbitals. If you put an electron in one of these, it's called an antibonding orbital, and that subtracts from the bond. It makes the bond weaker. So: you have the addition of the two atomic orbitals, and you have the subtraction of the two atomic orbitals. If you put an electron into the addition orbital, you add to the bond; that's called a bonding orbital. If you add an electron to an antibonding orbital, it subtracts from the bond order. So, to sort of backtrack to our discussion on waves, because we're back to this concept of having things be waves: you remember our wave interference, and how we talked about how waves can add to each other and how they can subtract from each other. The same thing is going to apply here. So with MO theory, we're treating our electrons as waves again. And we know that our waves have interference. We know that waves can add together if they interfere constructively.
Or they can subtract from each other if they interfere destructively. This gets into the difference between a bonding orbital and an antibonding orbital: whether the waves are interfering constructively or destructively. So adding them means that they're interfering constructively. Subtracting means that they're interfering destructively. And we will end there.
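To make the add-and-subtract idea concrete, here is a numerical sketch (my addition, not from the lecture). It uses 1D Gaussians as stand-ins for the two atomic wave functions; real 1s orbitals are exponentials, so this is only a cartoon. Adding the two functions piles up amplitude between the nuclei (bonding); subtracting them puts a node there (antibonding).

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
psi_A = np.exp(-(x + 1.0) ** 2)  # "orbital" centered on nucleus A at x = -1
psi_B = np.exp(-(x - 1.0) ** 2)  # "orbital" centered on nucleus B at x = +1

bonding = psi_A + psi_B       # constructive interference
antibonding = psi_A - psi_B   # destructive interference

mid = len(x) // 2  # the midpoint between the nuclei (x = 0)
print(f"bonding amplitude at midpoint:     {bonding[mid]:.3f}")      # ~0.736: density builds up
print(f"antibonding amplitude at midpoint: {antibonding[mid]:.3f}")  # 0.000: a node
```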
UCI Chem 1A is the first quarter of General Chemistry and covers the following topics: Atomic structure; general properties of the elements; covalent, ionic, and metallic bonding; intermolecular forces; mass relationships. Index of Topics: 0:00:13 Examples of Types of Hybridization 0:25:34 Ethane Video 0:27:34 Back to Lewis Structures... 0:37:52 Hybridization of ClF4 0:39:55 Two Theories of Bonding 0:40:49 Molecular Orbital Theory 0:43:16 Bonding and Anti-Bonding Orbitals 0:44:59 Wave Interference 0:45:07 MO Theory
10.5446/18976 (DOI)
Okay. All right. So what's our first step when we're figuring out VSEPR geometry? What do we have to figure out? Steric number, right? We need to figure out our steric number and one other sort of number. What is that? Coordination number. Good. So let's look at XeF4. So we drew this Lewis structure out before. We have it set up. Now, what is our steric number here? Remember what the steric number is: we're counting things that are bonded to the central atom, or whichever atom we're talking about, along with lone pairs. So what would our steric number be? Well, how many things do we have bonded to the central atom? Four. How many lone pairs do we have? Two. So our steric number would be six. Good. Now, what is our coordination number? That's the number of things that are bonded to it, right? So it is four. Good. So since we have a steric number of six, what does that mean our electron geometry would be? And remember, we're calling that EG; just so you remember, that's kind of my shortcut for it. A steric number of six makes it octahedral, right? Okay. So that's our electron geometry. That's acting as if we can kind of see the electrons. We're counting them as part of our geometry. Now remember, even though there are six things around it, we call it octahedral because that's the three-dimensional shape that it makes, all right? Okay, so then we have to move on to what kind of geometry? Molecular geometry. And remember, my shortcut for that has just been MG for the sake of speed and all that. Okay, so what shape do we have there then? So you're picturing sort of six things coming off a central atom, and we take away one of them, and then for the second one, do we take away the one directly across from it, or one of the side ones? The one directly across from it, right? There seems to be a lot of debate about that one, so let's just make this thing real fast. The level of debate is high enough that I would like to build it. We're going to do it the shortcut way this time, since we've already done it out with the molecules. So, octahedral shape, six things bonded to it, right? So it's like three sticks straight across from each other. Now, two of those positions are electron pairs, though. So let's take one away. Let's declare this a lone pair. Which one's the other one we're going to take? Are we going to take one of these pointing at you guys, or this one? The top one, right? And why do we do that? We want the electrons to push on each other equally, right? We want the electrons to be as far away as possible. So if we took one from here and then one from here, these two electron pairs would be next to each other, which is okay if we don't have a choice. But here we can put one here and one here. So what shape is that? Square planar, right? A square base, and it's all in one plane, so square planar. All right. Next one. So, boron trihydride. Okay. So we have our central atom and we have three things bonded to it. So again, sort of taking the shortcut with just sticks instead of the full molecules, we have this shape. So what is that? Trigonal planar, right? So first of all, what is our steric number? Three. What is our coordination number? Good, three. Are our molecular and electron geometries the same or different? The same. And it's trigonal planar. Okay. Now, as you're going through your notes, well, actually, we can do this one this way. Okay. So now this one. So we have a few different things that we can talk about on this one. So let's start with the sulfur's geometry. All right.
What is the steric number on sulfur? Be really careful. Is it four or is it six? Four. Good, right? Remember, we are not counting the number of bonds. We are counting the number of atoms that are directly bonded to it. So it's one, two, three, four. So the steric number and the coordination number are the same, and they're four. So what geometry does that make? This one actually kind of matches the prefix you're used to, right? Tetrahedral. Now, is that the electron geometry, the molecular geometry, or both for the sulfur? It's both, right? There's no lone pair, so they're the same. Now, I wrote "sulfur geometry" because I also want to talk about the oxygens' geometry. So I'm going to talk about this oxygen and this oxygen; they have the same geometry. Okay. These other ones are just bonded to one thing, so we don't talk about the geometry there. But what about this oxygen? What is the steric number? Good, the steric number's four, right? We have two lone pairs and two things bonded to it. Now, what is the coordination number? Two. Because we have this sulfur that's bonded there and we have this hydrogen that's bonded there. So now we have an electron geometry and a molecular geometry that are different, right? So what would the electron geometry be? Careful. Electron geometry is counting the electrons in; that's part of the geometry here. So it would be tetrahedral. Four things. Now, what's the molecular geometry? Good. Bent. All right. Now remember, I haven't been writing in the bond angles for all of these, but maybe we should do that for some, to make sure that you remember to do that. So what would your bond angles on this sulfur be? 109.5. Good. Now what about the oxygens? It's a tetrahedral electron geometry, so is it 109.5? No. What is it? Good. Less than 109.5, because the lone pairs push a little harder. All right. We're going to skip POCl3; that one's the same as what we've been doing, so I want to skip it and go on. OK, ClF4 minus. So this one is one with an ion, but we've already drawn the Lewis structure, and once you get the Lewis structure drawn, you don't have to worry too much about whether it's an ion or not. You're just basing it off the Lewis structure, and we counted that in already. So we have a steric number of what? Six. And we have a coordination number of what? Four. All right. So the steric number comes from one, two, three, four, plus one for each lone pair, so six. The coordination number comes from the fact that we have four things bonded to it. So what is our electron geometry? Steric number of six, so octahedral. So we're again sitting with this sort of structure. So when we turn two of those bonded atoms into lone pairs, what happens? What does it become? Square planar. When you're doing these problems, I would really suggest sitting there with some toothpicks, or at least, you know, if you don't want to carry them around with you, ink pens, whatever. I don't care if you bring toothpicks with you on the exam and are kind of playing around with them; that doesn't bother me at all. Okay. Well, we'll end there for the Lewis structures. Don't put your Lewis structures away. We have one slide, and then we're going to go back through them all and do something else with them. We're getting a lot of use out of those few Lewis structures. Okay, so the next thing that we need to spend some time talking about is the dipole moment. So this is going to be largely based on the video that I sent you guys. So hopefully you all watched it over the weekend; otherwise you might be a little lost.
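Before picking the dipole video discussion back up, here is the geometry bookkeeping from the examples above as a sketch (my addition, not from the lecture). Only the cases we actually did are filled in; the full VSEPR table has more entries.

```python
ELECTRON_GEOMETRY = {3: "trigonal planar", 4: "tetrahedral",
                     5: "trigonal bipyramidal", 6: "octahedral"}

# (steric number, lone pairs) -> molecular geometry, for these examples only
MOLECULAR_GEOMETRY = {(4, 2): "bent", (6, 2): "square planar"}

def geometries(bonded_atoms: int, lone_pairs: int) -> tuple[str, str]:
    steric = bonded_atoms + lone_pairs
    eg = ELECTRON_GEOMETRY[steric]
    mg = eg if lone_pairs == 0 else MOLECULAR_GEOMETRY[(steric, lone_pairs)]
    return eg, mg

print(geometries(4, 2))  # XeF4 or ClF4-: ('octahedral', 'square planar')
print(geometries(2, 2))  # the O-H oxygen from the sulfur example: ('tetrahedral', 'bent')
print(geometries(3, 0))  # BH3: ('trigonal planar', 'trigonal planar')
```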
So if you didn't, go back and watch it. I'll give you a second to get your work out. All right. So, the dipole moment. This has to do with the vector addition of the dipole moments of polar bonds. Now what does that mean? That's the technical definition. If you've had a lot of math and physics, you can think of this as literal vector addition, because that's what it is. If you haven't, I'll give you a more general feel for how it works. The idea behind the dipole moment has to do with the polarity I talked about in the video, and the difference in electronegativity. Now, when you watched it, what did we say a polar bond was? We said that if you have one atom that's very electronegative and one that's not — or at least isn't as electronegative as the other — you get a polar bond. So what's a very electronegative element you can think of, maybe the most electronegative? Yeah, fluorine's a good one. And a not-very-electronegative one? Maybe hydrogen. So if you had something like HF, would that be a polar bond? Yeah. Now, in a two-atom molecule like HF, if the bond is polar, the molecule is polar, because the fluorine is pulling the electron density toward itself, and that makes the bond polar. In a molecule with lots of different bonds, though, now you have to look at the geometry. You still go through and decide: are there polar bonds? Once you've decided there are, you have to see how those polar bonds add up — that's what I mean by vector addition. But you can also just look at it in general and see what cancels. If one electronegative atom is pulling the electron density that way, and an equally electronegative atom is pulling it the opposite way, they're going to completely cancel each other out. And if you have bonds in the same general direction — obviously I can't have two bonds going in exactly the same direction to two different atoms — those are going to add up. So HF is a good example of the one-bond case: the fluorine takes the electron density, there's nothing to cancel it, no other atoms to worry about, and you just get a polar bond and a polar molecule. Now, here's how we write this. There are two ways, and you can pick either one; I wrote both just to show you. Sometimes you'll see it written with a delta positive — a delta meaning a partial positive; it doesn't actually have a full positive charge, just a partial one — and a delta negative over on the other side, saying that end is partially negative. The other way is to put a little plus-sign arrow on the positive side and draw the arrow pointing toward the negative side.
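To make the "vector addition" idea concrete, here is a minimal sketch with made-up magnitudes — the numbers and function name are purely illustrative, not measured dipole values:

```python
def net_dipole(bond_dipoles):
    """Sum bond-dipole vectors (x, y). Each vector points from the partial
    positive atom toward the more electronegative atom."""
    x = sum(v[0] for v in bond_dipoles)
    y = sum(v[1] for v in bond_dipoles)
    return (round(x, 3), round(y, 3))

# Two equal bonds pointing in opposite directions cancel:
print(net_dipole([(1, 0), (-1, 0)]))  # (0, 0) -> no net dipole
# A single polar bond, like H-F, has nothing to cancel it:
print(net_dipole([(1, 0)]))           # (1, 0) -> polar molecule
```

A zero vector sum means no molecular dipole even when the individual bonds are polar — exactly the cancellation argument above.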
Now what if we have something more complicated? So for instance, ClF3. I've taken my model apart — let me put it back together real fast. Something like this is a little harder to see. First of all, is a bond between a Cl and an F polar? That's our first question: are these going to be polar bonds? They're not going to be super polar, but they'll be polar — fluorine is more electronegative than chlorine, so each of these fluorines will pull electron density away from the chlorine. So now the question becomes: is it a polar molecule? The easiest way to look at it is to ask what cancels and what adds together. If this one's pulling up and this one's pulling down, are they going to add, cancel, or do something else? [A student asks about a different example.] We'll do that one next — sorry, I made it more complicated than I needed to. So yes, these two will cancel. For the sake of our little polarity discussion, we can pretend they're not having any impact here. Is this third one going to have anything to cancel with? No. So is this going to have a dipole? Yeah. Let's build it back again. In which direction is the dipole going to point? That way, right? Because this one and this one cancel, and the fluorine is more electronegative. So your partial positive — the plus sign of your arrow — would be here, and your delta negative would be over here. Okay? All right. So you can think of it like that without having to treat it as vector addition, or if you really like the physics and math aspect, you can do the full vector addition — either works. Okay. Now, more examples. Let's look at XeF2Cl2. Yeah? [A student asks whether the ClF3 example is polar.] No — it would be polar. You can say it a couple of different ways: you can say it has a dipole, or you can say it's polar; either means the same thing. And the dipole is in the direction where everything doesn't cancel — when I had it drawn like this, the top and bottom canceled out, so it points in that remaining direction. Yeah? [How do you know which atom is more electronegative?] You use your periodic trends. Your chapter one material didn't quite go away, unfortunately — you do have to refer back to the periodic table. Yeah? [How did we get this shape?] Oh, good question — let's go through that a minute. The best way to start is to think about the electron geometry and then move to the molecular geometry; that keeps you from making mistakes. Our electron geometry would have been trigonal bipyramidal, but two of those positions get replaced by lone pairs. So what shape is this? Let's turn it on its side. T-shaped. Having three things bonded to it doesn't make it trigonal planar, because you have to pay attention to the lone pairs, and the lone pairs make a difference. Now, how did you know about the lone pairs? You would have had to draw out the full Lewis structure — which I didn't necessarily do here — but you always draw the full Lewis structure before you can ever decide on a geometry, and therefore before you can ever decide whether something is polar. Otherwise you might think: well, there are three things bonded to it, why wouldn't it be trigonal planar? And you might draw it flat like this. Now, if it really were shaped like this, would it have a dipole? No, right? All three cancel.
They're all pulling in directions that balance completely, so they all cancel. So if you had a trigonal planar molecule, you'd say no, it doesn't have a dipole — which is very different from T-shaped, where you would have one. Make sense? So, Lewis structures, by far and away: if you can't draw a Lewis structure properly, you're going to have a really rough time on this next exam. You always start with the Lewis structure — that's why we keep going back to ours — and then you work your way all the way through. Good question, though. Okay, anything else on this one? Okay. So now this one: XeF2Cl2. This is one we haven't drawn out yet. When we go to draw it, there are a couple of different arrangements: we could have it like this, or like this. Now, what's the difference between those? Look at them closely. One kind of arrow — the solid wedge — means the bond is coming out at you; the other means it's going into the board. You need to be able to picture these to see what's going on, so I'm going to build them for you. What's the electron geometry here? Octahedral — good, six things around it. And the molecular geometry? Square planar, yes. [Student: what do the thick ones mean?] The thick wedges mean those bonds are coming out at you, and the dashed ones mean they're going into the board, away from you. Okay, so this is what we have for the Lewis structure. Now, which arrangement is this — this one or that one? There are a couple of different ways to build this: one is to have like atoms directly across from each other, where the chlorines are across from each other and the fluorines are across from each other, and one is to put the like atoms next to each other. And there's a difference between these in whether you're going to have a dipole. If we look at the first one, we have the chlorines across from each other and the fluorines across from each other. Question, then: does this one have a dipole? Right — this chlorine and this chlorine directly cancel, and this fluorine and this fluorine directly cancel. So you get a direct cancellation of everything and no dipole. Now, is one of these atoms more electronegative than the other? Yeah — which one? Fluorine. So let's say we build it the other way, with the two fluorines on the same side. Do you think we'll have a dipole now? Yeah. Because, sure, the chlorines will cancel part of it — all of these bonds are polar, right? A chlorine–xenon bond is still going to be polar, just not quite as polar, while the xenon–fluorine bonds are a lot more polar. So while they cancel a little, the fluorines are going to overpower the chlorines, and you're going to have a dipole. Now, for the sake of consistency, say I hold it like this — which way is the dipole? Straight up. Why isn't it off to one side? The side-to-side parts cancel, right? Even though this bond goes left a little and that one goes right a little, those side-to-side parts cancel out. But they're both pointing up, so they both pull the electron density up — and your dipole is going to be straight up if I'm holding it this direction. (On the drawing, that arrow is supposed to be going into the board.)
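Here is that cis-versus-trans comparison done as explicit vector addition — a sketch with hypothetical relative magnitudes (Xe–F drawn stronger than Xe–Cl); none of these numbers are measured values:

```python
import math

XeF, XeCl = 1.0, 0.5  # illustrative bond-dipole magnitudes, not real data
c = math.cos(math.radians(45))

# trans: like atoms directly across from each other in the square plane
trans = [(0, XeF), (0, -XeF), (XeCl, 0), (-XeCl, 0)]
# cis: both fluorines on the top side, both chlorines on the bottom
cis = [(c * XeF, c * XeF), (-c * XeF, c * XeF),
       (c * XeCl, -c * XeCl), (-c * XeCl, -c * XeCl)]

for name, bonds in (("trans", trans), ("cis", cis)):
    x = round(sum(b[0] for b in bonds), 3)
    y = round(sum(b[1] for b in bonds), 3)
    print(name, (x, y))
# trans -> (0, 0): everything cancels, no dipole
# cis   -> (0, ~0.707): side-to-side cancels, the fluorines win straight "up"
```

The side-to-side components cancel in both cases; only in the cis arrangement is there a leftover "up" component, just as argued with the models.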
All right. Yeah? [A student asks about the other arrangement.] Well, this one will be polar — but if we drew it with like atoms directly across from each other, is that one going to be polar? No, because this one and this one directly cancel, and this one and this one directly cancel. How do you know which arrangement to draw? For this one, it isn't something I could just give you on an exam — I would have to draw it for you. I couldn't just say "XeF2Cl2 — is it polar or not?" You wouldn't know which arrangement was meant. Yeah? [Will we have to draw these 3-D diagrams?] No, I'm not going to ask you to draw it like that. You should know what the wedges and dashes mean, though — so if I drew these two and asked which one is polar, you could circle it, something like that. But don't worry too much about having to draw them yourselves. [Student: so in order to be polar, it has to have a dipole?] Correct. Having a dipole makes something polar; if something has no dipole because everything cancels, it isn't polar. One other thing, though: if I were to ask whether this has polar bonds, what's the answer? Yes — it has polar bonds; it just doesn't have a dipole. Be careful with that. Okay, I'll give you a second to write that down. So this shows you how the geometry really matters. Okay. Now, a little lesson in drawing Lewis structures properly and what happens if you don't. Let's take these two examples: ClF4 plus — sorry about that — and ClF4 minus. Let's first look at the plus. What's the steric number on it? Five. So what's the electron geometry? We have five things. Okay, let's build this one from scratch. Put the first two directly across from each other, and this one here. If you can't see these without building them, you really shouldn't feel bad at all — seeing in 3-D is one of those skills people can develop, but some have more of it than others to begin with. So we have five positions around the central atom. What does this look like? A triangle base with a pyramid above and a pyramid below, right? So what shape is that? Trigonal bipyramidal. Okay, now the question becomes: what's the molecular geometry? I have to remove one of these atoms and put a lone pair there for our visualization purposes. Am I going to remove this one or that one? And what are these positions called? The ones around the middle are equatorial, right? Think of it as a globe, with things going around the equator. And these? Good — axial. So are we going to remove an equatorial one or an axial one? Okay, we're not in agreement yet, so let's look at bond angles. What is the bond angle between equatorial positions? 360 divided by 3, so 120. And between an axial and an equatorial position? 90, right — 180 divided by 2. So do we want to give the lone pair lots of room, 120 on each side, or less room, only 90? Lots of room, right? Electrons get the most room, so the lone pair goes equatorial. Okay, so now we have our molecular geometry. What would that be called? Seesaw. Tip it on its side if you forget, right? Okay.
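You can also check the "lone pairs go equatorial" choice by counting crowded 90-degree neighbors at each site. A quick sketch — the position labels are made up for illustration:

```python
import math

# Trigonal bipyramidal sites as unit vectors (z axis is the axial direction).
SITES = {
    "axial_up":   (0.0, 0.0, 1.0),
    "axial_down": (0.0, 0.0, -1.0),
    "eq_1": (1.0, 0.0, 0.0),
    "eq_2": (math.cos(math.radians(120)), math.sin(math.radians(120)), 0.0),
    "eq_3": (math.cos(math.radians(240)), math.sin(math.radians(240)), 0.0),
}

def neighbors_at_90(site):
    """Count how many other sites sit 90 degrees away from this one."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return sum(1 for name, vec in SITES.items()
               if name != site and abs(angle(SITES[site], vec) - 90.0) < 1e-6)

print(neighbors_at_90("axial_up"))  # 3 -- an axial lone pair has three 90-degree neighbors
print(neighbors_at_90("eq_1"))      # 2 -- an equatorial lone pair has only two
```

Fewer 90-degree neighbors means less electron-pair repulsion, which is exactly why the lone pair takes an equatorial slot.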
So now, what do we think — dipole or no dipole? Let's look at the axial bonds first. Do you think they're going to contribute to the dipole at all? No, they cancel, right? This one points almost straight up and this one almost straight down. So what about these other two? I'm going to hold it just like this — which way would the dipole be then? Directly that way, right? Because the parts coming out at you and going into the board cancel, while the parts pointing in that shared direction add. Okay? So: does it have polar bonds? We'll start there. Yes. Does it have a dipole? Yeah. So is it a polar molecule? Yeah. Okay, next one: ClF4 minus. Now that gives us an extra two electrons, which means we have a different steric number altogether, right? What's our steric number now? Six. So — let's start with electron geometry. What is it? Octahedral. So we have this shape for our electron geometry. Now we have two lone pairs, so does it matter which position I remove first? Are the bond angles all the same or different? They're all the same, right — all 90. So it doesn't matter which one I remove first. But now that I've removed that one, which one do I have to remove second? The top one — the one directly across from it. So now I have a lone pair here and a lone pair here. What shape is that? Square planar, right? A square, all in one plane. All right, so what do we think — dipole or no? No, because this one and this one cancel and this one and this one cancel. So even though we have polar bonds, we're not going to have a dipole. [Student: what if the central atom were the most electronegative?] If you had the most electronegative element in the center, it would work the same way — but in general that's not how molecules get set up. For the most part, your least electronegative atom is in the center, so all the outside atoms are more electronegative and pull the density outward. But theoretically, sure. Yeah? [Student asks about polar bonds versus a polar molecule.] Whether the whole molecule is polar is not the same question as whether an individual bond is polar. For a bond, you look at the difference in electronegativity between the two atoms: if there's a difference, the bond is polar — unless the difference is so big that it's ionic. For the molecule, you then ask whether those bond dipoles cancel. Anything else? Yeah? [What if the outer atoms weren't all the same?] If we swapped two of these out for something with a different electronegativity, we'd be back to our xenon example with the fluorines and chlorines. And if we swapped in something much less electronegative — say, hydrogen — then we would have a dipole. So here: octahedral electron geometry, square planar molecular geometry, and in this case no dipole, because they all cancel. Now, because I know it's been like five minutes since we've seen those Lewis structures and you've been missing them — back to the Lewis structures. We have one spoiler up there, but that's all right. Let's look at nitrogen, N2. Do we think N2 would have a dipole? Does it have any polar bonds? No, right? Nitrogen and nitrogen have the exact same electronegativity because they're the exact same element. Your first step is always: are there polar bonds? There aren't, so N2 isn't going to have a dipole. Now what do we think about this one, N2O?
So that was our big resonance example — the one where we drew out three different structures and determined which was best. First of all, do we have any polar bonds? Yeah — which one? The N-to-O bond, right? Sure, nitrogen and oxygen are next to each other on the periodic table, but they still have a difference in electronegativity, and so it's going to have a dipole. In which direction — toward which atom? Toward the oxygen. Okay — double spoiler on that one. All right, this one. So we have a dipole, which we now know, right? Where's our polar bond — or our main polar bond — here? The carbon-to-oxygen bond. So which way is the dipole going to point? Completely up, toward the oxygen. And I'll write it the other way too, just so you have both notations. Okay, now what about this one? What's its electron geometry? Octahedral, right — that's the shape I've been doing a lot of building of today. And the molecular geometry, now that we've replaced two of those atoms with lone pairs? Square planar. And the bonded atoms are all the same, so they all have the same electronegativity. First of all, are there polar bonds? Yes — all of them. Does it have a dipole? No, they all cancel, right? It's the same kind of situation, with everything directly across from something identical: the two going this way cancel, and the two going that way cancel. So anywhere the polar bonds don't all cancel — where they aren't sitting directly across from each other — you get a dipole; the others don't. Yeah? Can you talk a little louder? I can't hear you. [A student asks about the hydrogens.] You can actually ignore the H's here for the most part, because the electronegativity difference between carbon and hydrogen is really, really tiny — if you look at the numbers, I think it's around 0.3 — while the difference between carbon and oxygen is much bigger. So you don't really have to worry about the carbon–hydrogen bonds compared to the carbon–oxygen bond, which has a really big difference. Yeah? [Student asks whether to draw in all the little arrows.] No — I was drawing those mostly to show you where the polar bonds were. If I ask whether this has a dipole, I'd just want you to say "no dipole"; otherwise it looks like you're trying to claim four separate dipoles. Good question. Yeah? [Student asks why hydrogen sits so close to carbon.] Hydrogen is actually a bit of an exception to the trend, which is why it ends up so close to carbon — it only has that one electron. If you want to see the carbon–hydrogen comparison, it helps to pull out one of the slides on electronegativity and look: they're pretty much identical, just a tad off. Yeah? [Student asks what happens when the outer atoms have different electronegativities.] Okay — let's say it was three things of all equal electronegativity.
I won't make up a real example off the top of my head, in case I pull the wrong one — so something generic like this. What shape is this? Okay, I'd like more consensus. You're right, but more consensus. We have a triangle, right? And is it flat on a plane, or a pyramid? It's flat, so it's trigonal planar. So if each of these bonds were polar, and all equally polar — all the same atom — would this have a dipole? No, they would all cancel. Now let's say we had two things that were kind of polar and one thing that was super polar — would it have a dipole then? It kind of depends, really. If these two were a fair amount polar — maybe something like chlorine — and this one was more electronegative, like a fluorine, it would be really tough to tell. You'd actually have to do the vector addition, and I wouldn't give you that. Anything I give you would be really obvious, like carbons and hydrogens with an oxygen, something like that. And in my made-up example, it's hard to tell precisely because these two are partly canceling each other and partly adding in one direction, while this one adds in the other direction — there's a lot going on, and that's when you have to actually do the vector addition: work out the individual bond dipoles and add them. [Student: what about going down the periodic table?] So what do we think — going down the periodic table, what happens to electronegativity in general? Does it get bigger or smaller? It gets smaller, right? Okay. Anything else? All right, next ones. What about these? Let's start here: SF6. Dipole? Yes? No? No, right — they're all pointing in balanced, opposite directions, so they all cancel. Next one: BH3. Dipole? No — they all cancel, and there's not much of an electronegativity difference there anyway. All right, next one. This one actually falls into the category of ones I probably wouldn't ask you, for exactly the reason I was just describing. I looked it up at some point, and it happens to have a dipole — but it goes in the category of things I wouldn't ask, okay? You can write that down if you want. What happens is that three of the bonds point generally one way — we have a tetrahedral geometry — and the fourth points the opposite way, so it becomes genuinely tough to tell; you'd have to do all the vector addition. Okay. Now, to switch gears a bit — but not too much — I want to talk for a moment about greenhouse gases, because now we know enough to talk about them properly. I hope everyone has heard of these. They're largely responsible for what important thing going on right now? A little warming, right — global warming. And you know some greenhouse gases already — the ones that get talked about, like CO2 and things of that sort. But what makes something a greenhouse gas?
Well, what happens is you need either a permanent dipole or what we call an induced dipole — which I haven't really talked about yet. We've already covered the permanent dipole: a difference in electronegativity where the bond dipoles don't cancel each other out. An induced dipole is something else, and a good way to see it is to just work through some examples. So let's go through these and see which ones are actually going to be greenhouse gases and which aren't. Let's look at N2. First of all, does it have a permanent dipole? No. Is there any way we can make a dipole by bending it or stretching it? No, right? There are only two atoms — you can't change anything about that. So is N2 a greenhouse gas? No. What about NO2? Yeah. We have this bent shape with the N in the center, so there's a polar bond going this way and a polar bond going that way. If I'm holding it like this, which way is the dipole? Down — and since the two oxygens are the same, the side-to-side parts cancel. Does that double bond matter? No, right? Because it isn't really a fixed double bond — what's the word we're looking for? Delocalized. That double bond is really split between here and here; those electrons are delocalized, going in between. So yes, NO2 is going to be a greenhouse gas — it has a dipole. What about oxygen, O2? Does it have a dipole? Can we make one? No — so it's not going to be one. What about CO? Yeah — that's carbon monoxide, and it has a dipole; there's a difference in electronegativity. All right, N2O. We've already talked about this one — it has a dipole. Which direction again? Toward the oxygen. Okay. Now, there's one I didn't put up here. What's another greenhouse gas you know of? CO2. What's CO2 shaped like? Linear, right? If you don't believe me, draw it: a carbon with a double bond to one oxygen and a double bond to the other, no lone pairs on the carbon. So it's linear. Now, is CO2 a greenhouse gas? Just use general knowledge — in your life, is CO2 a greenhouse gas? Yes. But I just told you it does not have a permanent dipole. So how is it a greenhouse gas? Yeah — you can induce one. Good. What if I take this linear molecule and bend it like that? Would it have a dipole? Yes — and if I bend it that way, the dipole points down. So you can make a dipole by bending the molecule. You can't do that with N2 or O2 — any two atoms bonded together — but you can when there's a central atom to bend around. So CO2 is a greenhouse gas because you can induce a dipole. Yeah? [Student: is that because you change the bond?] It's not that you're actually changing the bond — you're just bending and stretching it with electromagnetic radiation. You can bend it by putting energy into it, and you can also stretch the bonds. The only way for me to demonstrate the stretching is with my hands: if my body is the carbon and these are the oxygens, you can stretch one side like that, and then one bond dipole is different from the other. Or you can bend it. [Student: where can it bend?] Any time you have a bond angle, you can bend at that angle. Okay.
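As a deliberately simplified summary of this rule — permanent dipole, or a vibration that can induce one — here is a sketch; the true/false flags simply encode the lecture's conclusions for these example gases:

```python
gases = {
    # name: (has_permanent_dipole, vibration_can_induce_a_dipole)
    "N2":  (False, False),
    "O2":  (False, False),
    "NO2": (True,  True),
    "CO":  (True,  True),
    "N2O": (True,  True),
    "CO2": (False, True),  # linear, no permanent dipole, but bending induces one
}

for name, (permanent, inducible) in gases.items():
    print(f"{name}: greenhouse gas? {permanent or inducible}")
```

CO2 is the interesting row: both flags differ, and the `or` is what captures "no permanent dipole, but you can induce one."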
So in the last two minutes — don't pack up on me yet — I want to talk about the fact that we have two different theories of bonding. One is valence bond theory, which we'll talk about as hybridization; the other is called MO theory, molecular orbital theory. What we'll start with next time is valence bond theory and hybridization, and then we'll go on to MO theory. They both have their benefits and their flaws. Hybridization is really quick and simple: we can look at really large, complicated molecules and still go through them very quickly. MO theory is a little better at predicting things, but it's very, very complicated — so with MO theory I'm going to say, "hey, the computer said this," show you some pictures, and explain why. MO theory becomes very complex very quickly; we'll only make it up to second-row diatomics in this class. With valence bond theory, we'll start looking at large drug molecules fairly quickly, because it scales up really nicely. So those are your two theories and why you'd use each one. Keep in mind that they are separate to some extent: you can use one, or the other, or both, but they are very different. And again, if you didn't watch that video and some of today's lecture seemed a little foreign, please go back and watch it.
Chem 1A is the first quarter of General Chemistry and covers the following topics: atomic structure; general properties of the elements; covalent, ionic, and metallic bonding; intermolecular forces; mass relationships. Index of Topics: 0:00:12 VSEPR Geometry 0:10:11 Dipole Moment 0:19:35 XeF2Cl2 0:25:18 Two Different Lewis Structures 0:41:22 Greenhouse Gases 0:46:32 Two Theories of Bonding
10.5446/18974 (DOI)
Okay, so last time we met we had been talking through Lewis structures that break the octet rule. A quick reminder of the rules we're following as we go. You can't have more than eight electrons on your second-period elements — and of course your first-period ones, hydrogen and helium, can only have two. But when you get to that second period, you cannot have more than eight. You're allowed to have less than eight; it doesn't happen a lot, but when it does, it's usually with things like beryllium and boron. Whatever you do, though: no more than eight electrons — you can't put five bonds on nitrogen or carbon or anything like that. Your third-period elements we talked about: when you get down there, you have those d orbitals available, and you're allowed to use them to form structures with more than eight electrons total — ten or twelve if need be. And we had started working through some Lewis structures that show that. So there are some commonly seen cases you want to get used to: ones that will have less than eight. A lot of the time, beryllium will just form two bonds, leaving it with four electrons; aluminum will often do three, leaving just six valence electrons; and boron does the same thing, often with just three bonds. So, back to the Lewis structures we were working through last time. We had already done a bunch, and this is where we left off: POCl3. Remember, our first step whenever we do these Lewis structures is to count up all of our electrons. We have five from the phosphorus, six from the oxygen, and seven from each chlorine, so at the end of all of this we know exactly how many we have to deal with. Okay — when we go through and put this in, we start with our central element. Most of the time it's going to be the first one written in a formula like this, but sometimes it won't be; it's also usually the least electronegative element. In this case both of those agree, so we put our phosphorus in the middle and all the other atoms around it. Now, if you go through and put in all your electrons and form all your octets, you'll find something interesting that we didn't find with the other ones: we have a little bit of a choice. We could draw it with all single bonds, and if we did, everything has an octet — everything is technically happy. But let's go a little further. Remember, we also talked about formal charges, and I told you to go home and practice them with the other Lewis structures we drew. Now we actually need to do it with this one, because you're going to find something interesting. For our oxygen: it has six valence electrons from the periodic table, but here it owns seven — one through six from the lone pairs, plus one from the bond. So that's a minus one. And for phosphorus: it has five from the periodic table, but here it only owns four — one from each bond. So it has a plus one. So now we have a minus one and a plus one sitting right next to each other, and we can do away with those charges. You're always looking to minimize formal charges when you draw Lewis structures.
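Here is the formal-charge bookkeeping as a tiny function — just a restatement of "valence minus nonbonding electrons minus bonds," applied to the two POCl3 candidates:

```python
def formal_charge(valence, nonbonding_electrons, bonds):
    """Formal charge = valence electrons - nonbonding electrons - bonds,
    since each bond contributes one electron back to the atom."""
    return valence - nonbonding_electrons - bonds

# POCl3 drawn with all single bonds:
print(formal_charge(6, 6, 1))  # singly bonded O: -1
print(formal_charge(5, 0, 4))  # P with four single bonds: +1
# POCl3 drawn with a P=O double bond:
print(formal_charge(6, 4, 2))  # doubly bonded O: 0
print(formal_charge(5, 0, 5))  # P with five bonds: 0
```

The second pair of zeros is why the double-bonded structure wins, as the next paragraph works out.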
Remember, in our NNO example, we wanted to pick the Lewis structure with the lowest formal charges. We want to do the same thing here — so the all-single-bond structure actually isn't going to be correct. What you need to do is form a double bond to that oxygen and then give everything else its octet like before. Now everything has an octet except phosphorus — phosphorus has more than an octet, but that's okay. And now all of your formal charges are equal to zero, which is more stable than the structure with charges. So this is your correct Lewis structure. Okay, a few more to work through. Now we have ClF4 minus — moving on to some ions. Same rules as always; first step, count up all your electrons. We have seven from the chlorine plus seven from each fluorine — and then don't forget to add one more, because of that negative charge: a minus one means one more electron added in. So we get 36 electrons total. We put our least electronegative element, which is normally listed first, in the center — so chlorine in the middle and everything else around it. Now, if you fill in all your octets, this is one of those places where having counted your electrons first keeps you from making a silly mistake. Fill them all in, and everything has an octet — but if you just walked away now, you wouldn't realize you were missing four electrons. You've only placed 32, but you counted 36. So you have to remember to put those in. Where do they go? They can't go on the fluorines — all of our fluorines already have eight just from making that one bond, and we can't give fluorine more than eight. So they have to go on the chlorine, and we put both lone pairs there.
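The electron-counting step is easy to mechanize, too. A minimal sketch — the valence table only covers the elements used in these examples:

```python
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "P": 5, "Cl": 7}

def electrons_to_place(atoms, charge=0):
    """Sum of valence electrons, adjusted for charge:
    a -1 charge adds an electron, a +1 charge removes one."""
    return sum(VALENCE[a] for a in atoms) - charge

print(electrons_to_place(["Cl", "F", "F", "F", "F"], charge=-1))  # ClF4-: 36
print(electrons_to_place(["P", "O", "Cl", "Cl", "Cl"]))           # POCl3: 32
print(electrons_to_place(["N", "O", "O", "O"], charge=-1))        # NO3-: 24
```

Counting first is what catches the "only placed 32 of 36" mistake before you walk away from the structure.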
Okay, so now we'll move on and talk about another situation that occurs with Lewis structures, called resonance. What happens with resonance is that a molecule can have more than one Lewis structure, and they're all equally stable — all allowed, with no reason to prefer one over another. It's not like our NNO example, where we had three different, unequal structures — you could call those unequal resonance structures — and there was definitely one that was most stable. This is different: these are all equally stable, all just as likely to occur. Resonance happens when you can move around electrons — not atoms. If you have to take a hydrogen off one place and put it someplace else, that's no longer a resonance structure. You can only move electrons, which means you can move lone pairs and you can move bonds, and that's it. Now, in practice, if a molecule has three resonance structures, technically you have to draw all three for your answer to be correct. If I tell you to draw the Lewis structure for NO3 minus and you only draw one structure, technically you're wrong — unless I specifically ask for just one. The reason is that no single one of them is an accurate representation of what's happening; in reality it's a mixture of all three. So let's go through three examples to look at the different ways this can happen. Okay, the first one: NO3 minus. You go through this exactly like before; there's no difference at the start. Add up your electrons, put your nitrogen in the center, and put your oxygens around it. Now, if you fill in all of your electrons exactly as we did before, you'll find that you're short two electrons — and whenever you're short two electrons, that means you need to make a double bond. So we'll make our double bond — we'll just put it here for now — and fill in all of our octets. Now, a good question: let's look at our formal charges for a second. This is where we want to start getting good at formal charges and not have to write them down every time, so let's do it out loud. For nitrogen: five valence electrons from the periodic table, and here it only owns four — so a plus one. For the single-bonded oxygens: six from the periodic table, but each one owns seven — so each of those oxygens has a minus one. Now, think back to POCl3 a moment ago: we had a minus one next to a plus one, and we fixed it by making another double bond. Can we do that here? If you draw it out, you'll find that nitrogen would then have ten electrons, and nitrogen is not allowed to do that. So this is a case where you can't minimize the formal charges — you're stuck with what you have, and we just leave it. But now, to get into the resonance idea: why did I put the double bond where I did? Why couldn't I have drawn it at the bottom left instead, with the two negative charges elsewhere — or, for that matter, over here? When it comes down to it, there's no reason I picked the one I did, and in reality all of these are completely equal. To accurately represent this molecule you have to draw all of them; you can't just leave it as one or the other. So if I asked for this on a midterm, technically you would have to draw all three unless I asked for just one. And going back to what I said on the slide: all of these are equal. There's no preferred choice like in the NNO example, where we put the negative charge on the most electronegative element — here we're just putting it on different but identical oxygens. We didn't move any atoms around; we didn't pick up an atom and reattach it someplace else. We just moved the double bond and the lone pairs — only electrons. So all of these are equally stable, and all of these are allowed.
So there's no difference between them. Okay, moving on to a very similar one: HCO2 minus. We count up our electrons, put our least electronegative element — the carbon — in the center, and draw everything off of it: two oxygens and a hydrogen. A little bit of a spoiler up there with that double bond, but if you draw just the single bonds first, you'll see you're short two electrons, so you need to make a double bond. I chose to put it here, but there's no reason I couldn't have put it on the other oxygen. So let's draw this out — this puts the minus charge on the single-bonded oxygen. Now let's draw the resonance structure. This is where you ask: why here rather than there? There's no reason; neither is more or less stable — we just had to pick one. So we draw the other one too, and both are equally stable, and both need to be there to be completely accurate. Now, think about what I said about these being a mix of the structures. What I mean is that no single one accurately represents the molecule. In reality, the minus charge is being shared equally among the oxygens, and the double bond is being shared equally among the bonding positions — it's a mix of all of them. If we measured the bond lengths, we wouldn't find that two came out as single bonds and one as a double bond, and we wouldn't see them all switching if we measured really, really fast. No — every bond would actually measure somewhere in the middle. And if we go to the next slide, we can see a little more of what's going on with the shapes. If we look at NO3 minus, with all three resonance structures drawn, and we watch that double bond move around, what actually happens is that all three contribute equally — and one of the ways people sometimes draw that is with a dotted line going across the whole thing. What that means is that those electrons are delocalized: they don't belong to any one specific place. All those p orbitals — think about where the p orbitals are and what they look like, bonding here, here, and here; as we move through chapters two and three you'll get a better idea of exactly how those geometries work — are spread out over the whole molecule. Electrons in regions like that, where you have resonance, are called delocalized electrons. We'll revisit that off and on as we learn more about the shapes and geometries. Now, one more thing that helps with the homework. I talked about the bonding being split among the three positions. If you were to measure these bonds, you wouldn't get a value matching a single or a double bond — you'd get a bond order of about four-thirds, which makes sense: we have four bonds total — three sigma plus one pi — being split among three N–O positions, and four divided by three is four-thirds. The same thing happens with the charges: how much negative charge do the oxygens carry? You have two units of negative formal charge split among three atoms, so each oxygen comes out to about a negative two-thirds charge.
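Both fractions fall straight out of the counting; as a worked check (assuming the three N–O positions are equivalent, as the resonance picture says):

```python
from fractions import Fraction

total_bonds = 4           # three sigma bonds plus one pi bond in NO3-
positions = 3             # three equivalent N-O positions
total_formal_charge = -2  # two -1 oxygens in any single resonance structure

print(Fraction(total_bonds, positions))          # 4/3 average bond order
print(Fraction(total_formal_charge, positions))  # -2/3 average charge per oxygen
```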
That comes up off and on in your homework. Okay, now we're going to move on to something that's a little off-subject and isn't technically covered in this chapter of your book, but it's something people forget to teach you until it's really too late and we're already using it. It comes up a lot in organic chemistry — you'll get really, really good at these there — but you need some idea of how they work before you get there, so that when you see them in a book you know what's going on. It never gets formally taught many places, so I chose to put it in here, with all of this bonding, when we're really getting into structures and what molecules look like. These are called line structures. I have the rules written out for you here, but really the only way to get good at this is lots of practice — so keep the rules by you as we do these and follow along. The reason we want line structures is that they make drawing really fast. Right now we've been working with pretty small molecules — four or five atoms combined. But when you start looking at big molecules — different prescription drugs, say — now you have really huge ones, and if you have to draw every single carbon, every single hydrogen, every single atom, it gets tedious. In your bio classes you've probably seen different hormones and things of that sort; imagine drawing those out in full-glory Lewis structure, as we've been doing — how much room that would take up and how tiring it would get. Line structures say: hey, we know how these molecules behave, especially organic ones, so we don't need to draw everything in. Whenever you see a corner, or the end of a line — let's skip ahead a bit so I can show you — that's a carbon. Any little corner is a carbon; any end of a line without some other atom written there is a carbon too. And you assume that carbon has four bonds. If a carbon already shows four bonds drawn in, you don't have to do anything else with it — it's just a carbon. But if you have a carbon like this one, showing only two bonds, that means it has two more somewhere: carbon likes four bonds, so this carbon must have two hydrogens attached that we just can't see at the moment. You assume the hydrogens are there, and that saves a lot of time when drawing these out. All your other atoms, though — those have to be drawn in.
And any hydrogens on those other atoms have to be drawn in too. So really, the only things you're watching for are carbons — corners or ends of lines — and the hydrogens attached to carbon. There's one other big thing, which will matter when we get to hybridization: we don't draw lone pairs. In Lewis structures we always drew out all our lone pairs, so we always knew, as soon as we looked at an atom, how many it had. In line structures we don't, and that's going to come into play a lot when we start doing VSEPR theory and hybridization — when we care whether those lone pairs are there or not. In line structures, we just don't draw them. So first let's practice drawing some, and then we'll go the other way with a few more complicated ones — ones I wouldn't expect you to be able to draw from a formula, but where I would expect you to be able to tell me the empirical formula. We'll start with the drawing direction. Okay, so here they are drawn out where I can point to them a bit. Starting with this one: when we go to draw it, we put our pencil down — that's our first carbon. Now we go up: one, two carbons. We go down one more time: one, two, three carbons. One more to go: one, two, three, four carbons. Now we have to deal with our NH2, so we draw a line to it and write it in. We don't have to draw any of the carbon hydrogens — those are all assumed. This end carbon only shows one bond, so we assume it has three hydrogens we just can't see; this one shows two bonds, so two hydrogens; here, two hydrogens; and here, two hydrogens. But notice that on the nitrogen we do have to draw the hydrogens in — you can't assume hydrogens on a heteroatom, anything other than carbon. So we write those in. That's how you draw a line structure. And notice this nitrogen also has a lone pair that we just aren't showing — we don't draw it in line structures, but you do need to know it's there. Okay, the next one: two carbons, bonded to an oxygen. One, two carbons, then down to the OH, like that. You don't have to draw the bond between the oxygen and the hydrogen; you can just write OH. Again: one, two carbons, down to the O. And you assume that because this carbon shows one bond, it must have three hydrogens; this one shows two bonds, so two hydrogens. Now, a little more complicated one: this pentagon structure. Since every corner is a carbon, we just draw our pentagon, assume all the hydrogens are there, and the only thing we have to draw in is the double-bonded oxygen. You can see how much easier this is to draw than the full structure — and once you get really good at these, you'll look and say: two bonds shown there means two hydrogens there; two bonds here means two hydrogens here; and so on and so forth.
So it's much easier once you get good at it. Okay, now this one: three carbons. One, two, three. Now, on that last carbon we have two oxygens — one of them double bonded, one of them single bonded — and we're done. Again, reading it back: one bond shown here means three hydrogens; two bonds here means two hydrogens; and this last carbon has no hydrogens, because it already shows one, two, three, four bonds. When you see a carbon with four bonds drawn, you know not to worry about hydrogens there. Now let's go in the opposite direction: how can we figure out the formula for a structure that looks like this? First, you need somewhere to start, and it's usually a good idea to start with the carbons. Until you get good at these, it's also a good idea to go ahead and draw everything in — and while you're drawing in hydrogens, I'd draw in your lone pairs too. Starting with the corners: we have a carbon right here, here, here, here, here, and here; one right there, and there. Now, this one is written in as CH3 — sometimes you'll see those CH3 groups written out and sometimes you won't. I could just as easily have left it as the end of a line, and you'd have to know to put the CH3 in; it's drawn both ways. And over here are more CH3 groups. Count them all up and you have one, two, three, four, five, six, seven, eight, nine, ten, eleven carbons — so it's going to be C11. Now the hydrogens are trickier. This carbon shows two bonds, so it needs two more. Here we have one, two, three, four bonds, so no hydrogens there; we look here and it's the same — four bonds, no hydrogens. So we still just have those two. This one shows three bonds, which means one more we can't see — a hydrogen. So two hydrogens, then one. Same thing here: one, two, three bonds, so one more hydrogen. And the same here. Now, this one has four bonds, so we're all set — no hydrogens. So we're up to two, three, four, five; two off here gives us six and seven; three bonds here means one more, so eight; and then nine, ten, eleven, twelve, thirteen, fourteen, fifteen. So we're up to fifteen hydrogens: C11H15. For the other atoms, they're all drawn out for you, so you just write them in: N, O2. So we have C11H15NO2. And going through and drawing everything in the first few times is fine — more than fine, it's recommended.
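The implicit-hydrogen rule is simple enough to write down. A sketch, using the three-carbon acid example above, with the bond counts per carbon read off the drawing:

```python
def implicit_hydrogens(bonds_shown):
    """A line-structure carbon wants four bonds; the missing ones are C-H bonds."""
    return 4 - bonds_shown

# The three-carbon example: CH3-CH2-C(=O)-OH
bonds_per_carbon = [1, 2, 4]  # bonds actually drawn at each carbon
print(sum(implicit_hydrogens(b) for b in bonds_per_carbon))  # 3 + 2 + 0 = 5
```

The OH hydrogen isn't in this count because it's written explicitly — only carbon hydrogens are implicit.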
Now let's look at this one. Start with the carbons: each corner here is a carbon, which gives us six from the ring; then this one is seven, and this one eight. So we have eight carbons total. Now the hydrogens — the more difficult part of the group. We have one here and one here. These two show four bonds already, so no hydrogens. This one only shows three bonds, so there's going to be a hydrogen there; same for this one and this one — they each show three bonds and each need a hydrogen. So we're up to one, two, three, four, five. This one has four bonds — all set. This one shows two bonds, so it needs two more: six, seven. Here, two bonds, so two more: eight, nine. And here they're written out: ten, eleven. So we have C8H11 so far. Now we just count up the other atoms: one nitrogen and two oxygens. So we get C8H11NO2. Okay. Now let's look at an interesting structure — it goes along with what we've been talking about, and it brings in p-orbital delocalization too; it's a good place for it now that we can read these structures. This brings the resonance structures we've been drawing into line structures — and line structures are no different; you can draw resonance with them just like with Lewis structures. This structure is benzene, and that's a name you should know. Each of these corners is a carbon, and each one has a hydrogen coming off. Now, there's no reason to draw benzene with the double bonds here, here, and here — you can equally draw them here, here, and here. So we have a resonance structure between the two, with the double bonds alternating one way or the other. Here it's drawn out with the hydrogens, just so you can see that each carbon has one. And because these electrons are all delocalized and shared equally — remember, resonance really means the molecule sits somewhere in between, so each of these is like one and a half bonds — sometimes you'll also see benzene drawn just as a hexagon with a circle in it. Benzene comes up in lots of different applications; it's something to be aware of and know exists. You'll see these rings lots of places, and when you get further into organic you'll start calling them aromatics. This case also gives a better picture of how p-orbital delocalization works — here, it's over the entire ring. These drawings represent your p orbitals, one lobe up and one lobe down, and those p orbitals form the second bond of each double bond. You get this big ring of orbital overlap, and that's how it looks: a resonance picture showing p-orbital delocalization over the whole thing. You can now imagine NO3 minus looking the exact same way. Okay. Now we're going to talk a bit more about electronegativity. We've already covered the trends; now we'll get into it a little more and talk about polar bonds more carefully than before. Before, we just talked about electronegativity as the tendency of an atom to take more of the electron density away from its bonding partner, and we talked about the trends, right?
So we talked about how as you go across the periodic table this way, we are increasing our electronegativity — something like fluorine and oxygen and nitrogen are extremely electronegative, things over here aren't. We talked about how as we go up the periodic table we have increased electronegativity, which means that fluorine's very electronegative and things down here are not. So now we're going to talk a little bit more about how this applies to actual chemistry. So remember that in this case there's not a lot of exceptions, right? That was one of the nice things about electronegativity when we talked about it: you didn't have to worry about all the exceptions. And that's because the atoms already have a stable octet, or if they're breaking the octet, they're in a stable electron configuration. So because of that we got this nice kind of smooth slope all the way up following the trend, with just a few exceptions that we didn't worry about too much. Now let's talk about how this electronegativity, and how the differences in electronegativity between two elements that are bonded to each other, are going to affect the properties of a molecule. So polarity is something that you may or may not have heard of. A polar molecule or a polar bond occurs when two elements that have very different electronegativities are bonded to each other. So if we look back here, if we have something on this side of the periodic table bonded to something over here where there's a difference in electronegativity, these elements are going to be stealing all the electron density toward themselves. So if an element has more electron density being pulled toward it, is it going to have a positive charge or a negative charge? Well, more electron density — electrons are negatively charged — so it's going to be a bit more negative than positive. So if you take something like HF or LiH and you look at their electronegativities, you can see that there's a difference between them. And because of that, the electron density isn't shared equally. In this case, hydrogen is more electronegative, so it steals it away from the lithium, giving it an unequal distribution. In something like this, you have this hydrogen fluorine and it's sharing it unequally, and so now the fluorine has a little bit of a negative charge. And something like I3 minus would be nonpolar, right? Iodine is just iodine. They're all going to have the same electronegativity. They're all being, you know, spaced out equally. So they're all sharing it completely equally. There's no difference in electronegativity here. But here and here there would be. Now, this gets into the idea that there's a gradient here, right? If we bond something here and here, let's say, okay, well, that's polar and it's not sharing equally. This would steal more of the electron density than this. But it's not quite the same thing as if you take something like this and bond it with something over here that has a very, very low electronegativity. So there's a difference in the differences in electronegativity, right? We have small differences and big differences. And so that's what this is getting into. If there's just a ridiculously tiny amount of difference in electronegativity, or they're exactly the same, it's not going to be a polar bond. If you have two atoms bonded together and their difference is less than two, so there is a difference, but it's not that big of a difference.
Then it's going to be a polar covalent bond. Now, if it becomes greater than two, that's when we get into the fact that now it's basically stealing a whole electron's worth or more of electron density. And that becomes an ionic bond. So this fixes our original definition of the difference between ionic and polar covalent — or excuse me, ionic and covalent. Before, we had talked about ionic being a metal and a nonmetal, right? Metal, nonmetal. And we talked about covalent being all nonmetals. And now we can kind of see why. If you have things over here, they're all relatively similar in electronegativity. They're not hugely different. And so they're going to be covalent. Maybe they're polar covalent, but they're still going to be covalent. If you take something from this side of the periodic table and something from this side of the periodic table, now there's a huge difference between the two. And so now that's stealing so much electron density over to this side that you're getting more of an ionic situation. So to put that in a little bit more concrete terms, the elements whose nuclei pull on the electrons the hardest are the most electronegative. So something like fluorine taking something from potassium — it's such a large difference that it's now going to take basically a whole electron's worth. This is how we can decide on something called ionic character. So now it's not as simple as just covalent or ionic. Now we have this gradient that I've sort of referred to off and on through the quarter. So if we look at something like KCl and KI, and we want to know, well, which one has more ionic character? We can look at the K and we can look at the Cl, and we can look at the K and we can look at the I. Now, we could probably do this without having the numbers in front of us as well, but I put them here just so you can put some numbers on it. So we have potassium and chlorine, which have a fairly large difference, and we have potassium and iodine, which also have a fairly large difference, but not quite as much. So because Cl is so much more electronegative than I, KCl is going to have more ionic character. The difference in electronegativity between here and here is more than the difference in electronegativity between here and here. K and I are going to share those electrons just a little bit more evenly, where with K and Cl, the Cl is going to kind of be the bully here and take away more of the electron density, because it's more electronegative. And so KCl has a greater difference in electronegativity. The Cl is pulling on those electrons more. It's making that bond more ionic. And here's a sort of graph that kind of plots this out, so you can see it a little bit too. So here we have electronegativity difference — that's if we take something from here and here and we subtract the two values — and we have percent ionic character. So you can measure how much ionic character things have, versus being more just polar covalent, or not polar at all. And so we get this sort of line, where the greater the electronegativity difference, the higher the ionic character. And there's a few little outliers and things of that sort, but for the most part, it follows this curve. And so if you want to know how much ionic character something has, you look at the electronegativity differences. The bigger the difference, the more ionic character it has. Okay.
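If it helps to see all of those cutoffs in one place, here's a minimal sketch in Python. The specific Pauling values and the ~0.4 nonpolar cutoff are my own illustrative assumptions; the less-than-two versus greater-than-two split is the rule of thumb from lecture.

```python
# A minimal sketch of the electronegativity-difference cutoffs discussed
# above. The Pauling values and the ~0.4 nonpolar cutoff are illustrative
# assumptions; the "less than two vs. greater than two" split is the
# lecture's rule of thumb.
PAULING = {"H": 2.20, "Li": 0.98, "C": 2.55, "N": 3.04, "O": 3.44,
           "F": 3.98, "Cl": 3.16, "K": 0.82, "I": 2.66}

def bond_type(a, b):
    """Classify an A-B bond by the magnitude of the electronegativity difference."""
    diff = round(abs(PAULING[a] - PAULING[b]), 2)
    if diff < 0.4:           # ridiculously tiny difference -> nonpolar
        return diff, "nonpolar covalent"
    if diff < 2.0:           # a difference, but not that big
        return diff, "polar covalent"
    return diff, "ionic"     # basically stealing a whole electron's worth

# KCl vs KI: the bigger difference means more ionic character.
print(bond_type("K", "Cl"))  # (2.34, 'ionic')
print(bond_type("K", "I"))   # (1.84, 'polar covalent')
```

The exact cutoff numbers vary by textbook; what carries over is that ionic character grows smoothly with the difference, just like the percent-ionic-character curve shows.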
So there are exceptions to this idea of polarity just being a difference in electronegativity, and there's one example that sort of epitomizes this that's kind of interesting. So we have O3. So this is a weird example. Now, with O3, you could kind of think, oh, well, that's going to be like I3 minus, right? All of them have the same electronegativity, and so there's not going to be any sort of polar issue here. That's not quite how it works, though. If you draw out the Lewis structure for this — here I just have the line structure drawn, so I don't have any lone pairs — but if you were to add on the lone pairs here, count up how many electrons O3 has and see where I'm missing some. So there'd be two lone pairs here, three here, and one more on the middle oxygen. And we haven't gotten into geometry too much, but I think you can see that, hey, if there's a lone pair here, that lone pair is going to take up some room. And so that lone pair is going to kind of push in this direction. Now, if you look at how these electrons are distributed, you notice there's formal charges here. And there's nothing we can do to fix that. We can't put a double bond here to fix it like we did with our POCl3 example. And if you look at it, keeping in mind that there's a lone pair here, why couldn't we put this bond here? It's going to break your octet rule, right? If we had two, four, six, eight, ten electrons on an oxygen, it's not allowed. You can't do that with your second period elements. So we're stuck with this sort of distribution of electrons, which means that if you look at this resonance structure, there's going to be a delta positive and a delta negative on it. And so it's kind of this weird situation where you have three atoms that are identical — they're all oxygens, so they all have the exact same electronegativity — but the distribution of electrons sets it up so that it's actually going to be polar. It does have a dipole. It is going to be polar. And so we'll get more into the details of the geometry and more into the details of how you know if a molecule is polar. But it is interesting to note that there are some exceptions and some odd situations that you need to kind of keep in mind. It's not always quite as simple as just looking at the differences in electronegativities. We're going to have to look at geometries and things too. OK. So before we go do some more examples looking at polar bonds and things of that sort, I thought there were some interesting applications that sort of combine the two things that we learned: chapter one and this. So, and I've shown you some of these before, if we have something like microwaves — we know what microwaves are now. We know that they're just electromagnetic radiation, that this electromagnetic radiation has wavelengths, and that the only thing that really makes microwaves special in this case is that they're in that particular region of the electromagnetic spectrum. And so if we're in that particular region of the electromagnetic spectrum, what happens is that these microwaves are able to actually go ahead and excite water molecules. So when you microwave something, what you're really doing is you're taking water molecules and you're moving them around.
And so there's this little demo from the PhET website, which I think I've shown you before, where if we turn the microwave on, those waves are going to bounce water molecules around. So you turn on your microwave, you have water molecules moving around, and all of these are bumping into each other, rubbing against each other, and when things do that, what does it make? It makes friction, right? And if you have friction between things — you go like this, you build up friction in your hands — what happens? Your hands get warm. So the same thing happens here. You use microwaves, things that we learned about in chapter one, and microwaves excite polar molecules. Okay, so they excite polar molecules, and water is going to be polar, right? We have oxygen, which is really electronegative. We have hydrogen, which isn't super electronegative — well, more electronegative than you would think for where it's at on the periodic table, but that's beside the point. So water is definitely a polar molecule, so the microwaves can excite those, bounce them around, and the friction heats up the food. Now, why can't you put metal in a microwave? Why would that make any difference? Well, it's not necessarily that you can't put metal in a microwave. This isn't necessarily related to polarity, but it is an interesting little concept. So it's not that you can't put metal in a microwave, it's just that if you do — if you put a big heavy metal container in a microwave — it's going to block all the electromagnetic radiation. It's going to stop all the microwaves from getting into the water. And so the water can't heat up, and then therefore your food can't heat up, because the metal just bounces the electromagnetic radiation, the microwaves, right back. Now, that's for real metal, heavy metal, you know, big bowls of metal, things of that sort. If you ever put tinfoil or just small pieces of metal into a microwave, then you may have noticed that suddenly you start getting sparks, right? And that's just because now you're kind of exciting the electrons around. So this actually shows you how you can do it in a very controlled manner. Maybe. There we go. So here they have a microwave, and they're going to put a light bulb in it. So a light bulb is good in the sense that it has metal in it, so it shows you what happens if you put metal in it, yet it's also very controlled. The filament is in an inert atmosphere, so it's not going to burn, it's not going to spark, but it will light up. So you can see that, okay, it's really just that you're sending electricity through metal and heating it up. And so this shows you that, hey, you can do it if you have it in an inert atmosphere. It's just that normally you get the sparks, because the electrons are going through the metal. This is kind of a cool little video that shows you that. So that's a whole different phenomenon than the way that the water works. So they're not necessarily related. Okay, so now we're going to go back to those Lewis structures — the Lewis structures that, like I've told you, we're going to keep coming back to pretty much the entire time, all the time. And we're going to go through, and we're going to decide which bonds are polar. Now be careful here. We're not deciding which molecules are polar. We're deciding which ones have polar bonds. We can't really get into the idea of polar molecules quite yet. We haven't quite covered that material.
But we can go through now and we can say, okay, which bonds are polar. And so we're going to go back through all of them and decide that. Okay, so back to the very beginning. Give it a second to adjust. So if we look at N2 — this is kind of a nice example to start off with, right? We have a nitrogen bonded to a nitrogen. So there's not going to be any difference in electronegativity there. They're going to be the exact same. So this one is going to have no polar bonds. And now moving on to the next one: we have carbon, hydrogen, and oxygen. So this one we had drawn out like this. We had decided that this was the proper way of drawing it because of the way the formal charges worked, and because of how our octet works. Now if we look at this, which ones are polar here? So if you look at carbon and hydrogen, there's a very, very, very tiny difference in electronegativity. It's such a small difference that we don't consider that a polar bond. But now between carbon and oxygen, if you look at those, that's definitely polar. So this one has polar bonds. And again, we're just deciding if the bonds are polar. We're not deciding if the molecule itself is polar. We're not quite there yet. We have to do some VSEPR theory first. Now, if we look at XeF4 4 plus: this one, we have a noble gas in the center. So noble gases are definitely not very electronegative. And we have all of these fluorines. So all of these fluorines are definitely very electronegative. So all bonds are polar. So that one, they're all going to be polar. And then we'll come back and we'll decide which of these are polar molecules here in a few lessons. So now let's look at our NO2. So this was our big complicated example, right? Where we had to go through and we had to draw all these different, what we're now calling, resonance structures, where we moved the electrons around. But this resonance structure was different than the other ones, because these were unequal, right? The other ones were all equally shared, and so those were resonance structures where you had to draw all of them. This one, you just have to draw the most stable one, because this one is better than these options. So we're just looking at this one. And we say, well, which of these bonds are polar? Well, this bond has a formal charge that makes it a little bit polar, but we'll not pay too much attention to that one. So let's look at this bond, though. This bond is definitely polar, right? We have a nitrogen and an oxygen. We have a negative formal charge here and a positive formal charge here. And so this one's definitely going to be a polar bond. Okay. Now we look at SF6. So we look at SF6 and we look at the differences in electronegativities and where they are. And we know that our nitrogen and our oxygen and our fluorines are super electronegative. They all fall on that section of the periodic table where they're very, very highly electronegative. Sulfur, not so much. It's sort of more in between. And so every one of these bonds is polar. Now you'll notice as we're going through these — and this will happen throughout the rest of the quarter — sometimes I do skip certain Lewis structures for certain applications. And that's just because it doesn't completely apply or there are extra complications that I don't want you to worry about too much. So if you notice that we're skipping a few of them, just don't worry about those. I skipped them on purpose. Okay. So now we have this one.
This is very similar to the other one we did, right? We have Xe and Fs. The only difference is in our other one we didn't have these lone pairs. And that's going to matter later on when we talk about geometries and things of that sort. But for right now, when we're just looking at polar bonds, that's the exact same thing as before. A not very electronegative center, very electronegative fluorines. So all bonds are polar. Now we have H2SO4. And so again, electronegative elements with not very electronegative elements. So all of our bonds are polar. Now for the most part — and there are exceptions to this — but for the most part you need to be able to decide whether these are polar bonds or not just by looking at them. You don't want to have to be going to the periodic table all the time to make this decision — or excuse me, a periodic table with actual numbers on it. You'll always have a periodic table in front of you. But you don't want to have to constantly go to the ones with numbers. So when you're doing your homework, with very few exceptions, you shouldn't really be looking at that periodic table that I had in there that had all the numbers on it. You should be just looking at a normal periodic table, like the normal ones that we have around, and deciding based on where they are in the periodic table which one's going to be more polar, which one's going to be more electronegative. What are the differences? How close are they? How far away are they? And decide your polarity based on that. So now we have POCl3. And so if we look at this, we can decide, OK, are these bonds polar? Well, oxygen — that's one of our three really electronegative elements. Chlorine is in our halogens, so those are very electronegative. Phosphorus, not so much, right? Third period, so it's not super far up in the periodic table, and a group over, so not very electronegative here. So these are all going to be polar. All right, a few more to go. ClF4 minus. So this one may or may not be a little tough, depending. So a lot of times you think of chlorine and fluorine as both being really electronegative, but there is still a pretty big difference between them. Fluorine is still going to be a lot more electronegative than chlorine. So even though we are comparing two pretty highly electronegative elements, fluorine still wins. Fluorine still gets to steal more of the electron density. And so all bonds are polar here, too. So this is sort of one where they're both very electronegative, but they're still going to be all polar. OK. So that takes us up through our Lewis structures. And I think we'll end there for the day. And then next class we'll finish up chapter two and start going into some VSEPR theory and deciding how we look at these molecules in actual geometry, as opposed to just a 2D Lewis structure.
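One quick take-away sketch before the topic index: the hydrogen-counting procedure from the line-structure examples earlier boils down to a single rule, and a minimal version of it might look like this. The bond counts and the CH3-CH2-COOH style chain below are an illustrative assumption, not a general structure parser.

```python
# Hedged sketch of the implicit-hydrogen rule from the line-structure
# examples: each neutral carbon gets enough hydrogens to bring its total
# bond count up to four.

def implicit_hydrogens(bonds_to_carbon):
    """Implicit H count for a neutral carbon with the given total bond order."""
    return max(0, 4 - bonds_to_carbon)

# One bond on the first carbon, two on the middle one, and four on the
# carbon carrying the two oxygens (one double bond, one single bond).
carbons = [1, 2, 4]
h_from_carbons = sum(implicit_hydrogens(b) for b in carbons)
print(f"C{len(carbons)}H{h_from_carbons} from the carbons")  # C3H5 from the carbons
# Any O-H hydrogen and the heteroatoms still get counted separately,
# which is how a chain like this ends up as C3H6O2 overall.
```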
Chem 1A is the first quarter of General Chemistry and covers the following topics: atomic structure; general properties of the elements; covalent, ionic, and metallic bonding; intermolecular forces; mass relationships. Index of Topics: 0:00:17 Breaking the Octet Rule, 0:06:44 Resonance Structures, 0:14:28 Delocalized Electrons, 0:16:40 Line Structures and Rules, 0:24:06 What is the Formula..., 0:27:55 Benzene, 0:30:30 Electronegativity, 0:40:06 Microwaves, 0:44:29 Back to Lewis Structures
10.5446/18972 (DOI)
So last time we had started talking about our different periodic trends, and we had started with effective nuclear charge because that's one of the main reasons behind all of our other trends. So today we're going to continue with that, and we're going to start up where we left off, doing examples with ionic radii. So we had talked about a few of them, so it's time to do a few more so you can see exactly the sorts of things that you're expected to know. So here's a little periodic table for you to be able to use as we go through. As always, when you're listening to these lectures you should have the periodic table in front of you so that you can follow along even when I don't have it up on the screen. So let's do these three first. We have lithium plus, beryllium 2 plus, and fluorine minus. So you can find all of these on the periodic table. You can say, well, there's lithium 1 plus, so that has the same sort of electron configuration as helium. Beryllium 2 plus, which would also have the same electron configuration as helium. And then F minus. So when we go through and we put these all in order, we would want to put our beryllium 2 plus first, because that's going to be our smallest. Then our lithium 1 plus, because now we have a little bit less of an effective nuclear charge. And then our F minus, because with F minus now we have that whole extra row of electrons, as well as the fact that it's a minus charge. So the effective nuclear charge is lower, and so that gives the electrons a little bit more room to spread out. The combination of which makes fluorine definitely the largest. So these two are isoelectronic, so we're just looking at the charges. Fluorine minus, now, that has a whole other row of electrons, and it has a lower effective nuclear charge with the minus charge. Okay, next one. So we have copper 1 plus, copper 2 plus, and potassium 1 plus. So we go to our periodic table and we find them, and so there's copper. So copper 1 plus and copper 2 plus would just have their s electrons removed — remember, always removing from the s electrons first there. So then we look at where potassium is: same row, but all the way over to the left. So that's going to be isoelectronic with argon. So when we look at these and we put them in order, the first thing to look at is our charges, which tells us which one's going to be our smallest. Well, the Cu 2 plus, that's definitely going to be our smallest. So then we go through and we say, well, what about copper and potassium? They have the same charge and they're in the same row of the periodic table. So which one's going to be smaller there? At that point we backtrack to what we did last class: we look at where they are in the periodic table and we do it just like we would do atomic radius. So what happens to our radius as we go across the table this way? Well, to know that, we have to think about what happens to our effective nuclear charge. So as we go across the periodic table that way, our effective nuclear charge becomes greater. So the protons in our nucleus are pulling in those electrons harder, and it's making it smaller. So something on this side of the periodic table is going to be smaller than something on this side of the periodic table. So between copper 1 plus and potassium 1 plus, the copper 1 plus is going to be smaller because it's further to the right. And so we have this ordering.
This is the smallest because it's both furthest to the right and the highest charge. And then this one, because it's further to the right than potassium. So even though they're the same charge, at that point you backtrack to doing it how we did atomic radii. Okay, next one. So now these are all ions — they're all our halogen ions, and they're all a minus 1. So there's nothing here really to look at as far as charges go. So the only thing we need to look at is our trend as we go down the periodic table. So as we go down the periodic table, we're adding shells of electrons, and each time we add a shell of electrons, it's going to get larger. And so because of that, our smallest is going to be up here and our largest is going to be down here. So that makes fluorine our smallest and then bromine our largest. And so our ordering is like that. So this is probably the trickiest one of the group, but keep that in mind. It's really no different than this. Just like here we can say, well, we're going down the periodic table, they're all the same charge, and so all we're looking at is the trend — we're doing the same thing here. We just looked at the trend going across the periodic table. Okay, let's do one more where we kind of do some matching. So I'm going to give you a picture with these four ions, and I want you to match up their pictures. So we have sodium, magnesium, chlorine, and oxygen in their ionic forms. So let's think about how we can do this. So again, you're looking at your periodic table and you're thinking, what are the size differences here? So I have them as relative sizes compared to each other. We know in this case that I have a one-to-one ratio. So if we look and we say which one's going to be our smallest of these ions and which one's going to be our largest of these ions, we can say, well, magnesium's definitely going to be our tiniest, right? Because it's a plus two. Where oxygen's going to be our largest, because it's a minus two — there's not nearly as many protons per electron. So that would make this our magnesium oxide and this our sodium chloride, since the sodium and the chloride are sort of in between. And so we can go through and we can label and say, well, this is our biggest of the grouping. And not only that, but we know that magnesium oxide would form a one-to-one ratio and that sodium chloride would form a one-to-one ratio. So that's one where we can kind of go through and match up. Okay, so that's sort of it for our atomic radii and our ionic radii. Now we're going to move on to first ionization energy. So what ionization energy is — well, let's think about the wording here: ionization. It's the amount of energy it takes to ionize something, to pull off an electron, to take an electron from an atom and pull it away. So we have some trends here that we already sort of generally showed you earlier. As you go across the periodic table this way, that ionization energy is going to increase. And as you go up the periodic table, it increases. Now this one has a ton of exceptions that we're going to talk about in a minute. But first of all, let's explain why this trend happens, because it's never enough to just memorize the trends. I want you to know why they happen, and I will ask you that. So as we go across the periodic table, there's something else that increases. So think about what we talked about when we talked about atomic radii and why they changed, and then the trend before that that we talked about, being effective nuclear charge.
So that effective nuclear charge — it's the reasoning behind a lot, if not most, of the other trends. And this one's no exception to that. As you go across the periodic table, your effective nuclear charge increases, right? It was like adding more strength to that magnet between the nucleus on the inside and the electrons on the outside, but not adding another shell. So we weren't moving the electrons further away — they were the same distance away. So that effective nuclear charge increased. So if effective nuclear charge is increasing from this side to this side, well, now you have to look at it and you have to say, okay, now you're saying my electrons are being held on to tighter by my nucleus. So if I want to remove one of those electrons, I have to pull it away with more strength. If you think about two magnets stuck together, which is harder to yank apart: a very weak magnet or a very strong magnet? Of course, the very strong magnet. So this is the same sort of idea. You're trying to pull an electron away from something with a very high effective nuclear charge if you're over here. Over here you have a lower effective nuclear charge, so in comparison it's going to be relatively easy, so your ionization energy will be relatively low. Now with that being said, there's a whole bunch of exceptions. But the exceptions follow their own little trends. So we can look at these in a minute and see why they are how they are. And they're all going to come from half-filled and fully filled sub-shells. So this comes from an older book from Chang, but it's also highly edited — I've added in some things. So I want you to make sure you copy this down. Now with this, we have each of the elements as we go across the periodic table. And I want you to explain why there are these exceptions. Because as you start on your second row and you work your way across, you see that we have helium and then lithium. Now you would be expecting it to just go beryllium, boron, carbon, nitrogen, oxygen, fluorine, and neon, all the way up straight. Yet you have an exception here and an exception here — and then, of course, the same thing over here. So to explain this, we have to go back a little bit and talk about electron configurations again. So now that you're good at electron configurations, write out the electron configurations for these. So for beryllium, you'll go to helium and you'll say, okay, well, now the 2s is filled, and you'll have 2s2. Now if you write down boron, you'll see you're one further on the periodic table. So it's going to be 2s2 2p1. So what happens here and here is that you reach a sort of level of stability here that you don't have here. Boron has this one little electron sitting in its p orbital. So that's a lot easier to remove than trying to remove from this. So this one's going to be lower than beryllium, because with beryllium you're removing from a fully filled sub-shell and there's a relative stability there. With boron, you just have 2s2 2p1. There's no sort of added stability that comes from that one little electron being there. And so it doesn't take much energy to remove it. So now let's see if you can do nitrogen and oxygen. So you're looking at your periodic table, you're finding nitrogen on it, and then you write down the electron configuration. So it's going to start off the same as these: we're going to start with helium, and then 2s2, and then 2p3. Now this is something we haven't completely talked about yet.
But when you have a half-filled sub-shell, there's a level of stability there too. It's not quite as good as having a fully filled shell, but there's still extra stability. And so what happens is that when you move to the oxygen and you add this fourth one, you get a little bit of repulsion from that. When you add in that one extra electron, there's sort of this combination: the extra-stable half-filled shell, and then a little bit of added repulsion from adding in that next electron and having it be the opposite spin. So that combination of the two issues means that oxygen is going to be a little more willing to lose that one extra electron and end up with the same sort of electron configuration as nitrogen. For this one, it might be good to also draw out an electron configuration diagram on your own and see how this works — here you would have your p orbitals, and you would draw your one electron per orbital. You'd see that it's half filled. So adding that one extra electron destabilizes it a little bit, and you end up with this little exception. Now, on your own at home, I want you to write out the electron configurations for magnesium and aluminum, and phosphorus and sulfur, and prove to yourself that it's the exact same issue. When you do that, what you'll see is that basically just the n value will change. So go home and do that and prove to yourself why this is like it is, so that you could explain it to me if it showed up on an exam or something. So let's do a bunch of examples now with this. And remember, you're always keeping this in mind: when you get to this half-filled shell, you might have some exceptions that you need to look at. And when you're here, you're going to have some exceptions you have to look at. Now, as for out here, you can look at the exceptions, pay a little bit of attention to them, see where they are. They're interesting, but I'm not going to test you on all of them. That goes into a little bit more inorganic chemistry that we're not going to get into in this class. But these we can definitely explain just using electron configurations. So that's something that you should be able to explain. Okay, so same idea as before. Let's rank these. So let's first do some that don't have exceptions. Now of course, when you're given this on any sort of examination, you wouldn't be told whether they're normal or not. But in this case, we're going to start that way. So: helium, neon, and argon. So you're going to go to your periodic table and say, okay, well, these are definitely noble gases. So they're going to be over here: helium, neon, and argon. And rank them in order. So which one's going to be the easiest to pull an electron off of? And for that matter, why? We didn't necessarily talk about the trend going down. So going down, we know that it's easier to pull off from things lower on the periodic table. Why is that? Well, that's because they're further away, right? So if you have two magnets that are far apart like this and you try to yank them apart, it's going to be a lot easier than if you have two magnets that are nearly touching. Same idea here. The electrons are further away, and they have all of those inside electrons. So remember, there was a phrasing for that too: you have all the inside electrons that are blocking the outside electrons from feeling the nucleus. So the buzzword that went along with that one was shielding, right?
The inside electrons are shielding the outside electrons, and so they're able to be pulled off easier. They're further away and they're shielded. So with that in mind, helium is going to be our smallest, and then neon, and then argon — sorry, I said that backwards. I did decreasing, and we want lowest to highest. So argon is going to be our lowest, and then neon, and then helium. So we start with the lowest one down here, because the electrons and the nucleus are furthest apart and it has the most shielding. So that electron is going to be able to be pulled off very easily — it's not going to take very much energy to pull off that electron. Something like helium, though — now there's no shielding, right? There are no electrons between helium's electrons and the nucleus. So that's going to be very hard to pull off. There's no shielding, and they're very close to each other. OK, next one: boron, lithium, and neon. So now if we look at this, we have lithium, boron going this direction, and neon. So they're all in the same row, and they're all going just straight across. So our trend going that way has to do with effective nuclear charge. And which one has the smallest effective nuclear charge? Well, lithium does, right? Because it's over on this side. And so if it has a very small effective nuclear charge, then it's not going to take as much energy to pull off the electrons. And so the ordering will have to do with that. It doesn't take much energy to pull off the electrons, so the ionization energy is very low. So because lithium has a low effective nuclear charge, it also has a low ionization energy. And then so on across the periodic table. OK, so those are our normal ones. So those are the ones that follow the rules, right? There are no weird things going on with their electron configurations or anything like that. That's not always going to be true, especially when you start looking at boron. When you see a boron, a beryllium, or a nitrogen, an oxygen, or anything below them, that's when you want to start looking at their electron configurations. So now let's do lithium, beryllium, and boron. So let's say there wasn't any exception going on and it just went straight across the periodic table. We would know to rank lithium as our smallest, and then beryllium, and then boron. Now we have to think back to the last slide, though, and we have to say, OK, how are the electron configurations factoring in here? Well, beryllium here is at a fully filled sub-shell. It has this relative stability. Boron has this one extra electron in its p orbital that isn't in any sort of half-filled or fully filled state. So with that electron being out on its own, it doesn't necessarily mind losing it. It doesn't take much energy to lose it. Where beryllium is saying, well, I'm kind of at this level of stability — I don't want to lose that electron. So instead of going lithium, beryllium, boron, the beryllium and the boron flip-flop. So you get lithium, boron, beryllium. OK. And then our next one: C, N, and O. So right here across the periodic table. And again with this one, if we were looking at this and we said, OK, this is going to follow the trend, we would do it in exactly this order. We would go C, N, and O, because C has the lowest effective nuclear charge. Oxygen has the highest, so oxygen would be the hardest to remove an electron from. But it's not only based on the trends. It's also based on the electron configurations.
And if we look at the electron configuration for nitrogen, it already has a half-filled shell. Oxygen — you add in one extra electron. So you're not making it fully filled; you're just adding the one. So in this case, oxygen doesn't really want that extra electron necessarily. It adds a little bit of repulsion. It doesn't add any stability. So oxygen's OK with losing it. It's not going to take a lot of energy to remove that one. Where nitrogen has a half-filled shell, so it says, I don't really want to lose another one. I don't want to lose an electron and be stuck with two in my p orbitals. And so instead of going with the trend, the oxygen and the nitrogen flip-flop. It doesn't take as much energy to remove the electron from the oxygen as it does from the nitrogen. OK, so those are our examples — some following the trend, some not. And you should be able to recognize any of these, regardless of whether I tell you if there are exceptions or not. That was just for an intro. OK, next thing that we're going to talk about: second and third ionization energies. Now this is something where I could expect you to kind of rank within an atom, or have you look at the electron configurations to figure it out. So what a second and third ionization energy is, is after you pull off the first electron, how much energy does it take to pull off the second one? How much energy does it take to pull off the third one? That's what you're going through as you do second, third, and so on with ionization energies. So your first one is always going to be your smallest, which sort of makes sense, right? If you take one off, now you have more protons than you do electrons, right? You have a cation — you have a positively charged ion. And so of course it's going to take more energy to pull off the second one, because now you have more positive charge per negative charge. So with each one that you remove, the energy is going to go up a little bit. Now you'll have some homework problems on trying to rank these, and you end up having to have a lot of data for them. So that's why we'll leave that for the homework. But what happens is that as you go through, you'll be able to see big jumps. So let's go back here so I have a periodic table to explain with. So for instance with beryllium, you'll see a certain first ionization energy, and then that means it would be down to 1s2 2s1. So now it looks like lithium: pulling off that second 2s electron, it's going to be a little bit bigger, but it's still going to be relatively small. At which point you'd be isoelectronic with helium — you'd have the same electron configuration as helium. And when you go to pull off that next one, it's going to be a lot larger than the first and the second one. It's going to be a huge jump. So you're going to see a little bit of a difference, a little bit of a difference, a huge jump. And that's how you know how these work: you look at the differences between them. The first is always the smallest, and then they get bigger and bigger and bigger, but you can tell what sort of electron configuration you're dealing with based on the differences between them. And you'll have some examples of that in your homework. Okay. So that sort of wraps up ionization energy. Now, electron affinity — and then we'll also talk about electronegativity, even though that doesn't technically fall in this chapter. The two are very similar, and so I want to talk about them together.
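Before we get to electron affinity, here's a minimal sketch that puts rough numbers on those jumps. The magnesium values are approximate literature figures, included only for illustration; the jump-spotting rule is the idea from lecture.

```python
# Approximate successive ionization energies for Mg, in kJ/mol (rough
# literature values, for illustration only). The first two removals come
# out of the 3s sub-shell; the third has to break into the neon core,
# so it jumps enormously.
mg_ionization_energies = [738, 1451, 7733]  # IE1, IE2, IE3

# The ratio of each IE to the one before it stays modest until you hit
# the noble-gas core. The number of removals before that jump tells you
# the number of valence electrons.
ratios = [later / earlier for earlier, later in
          zip(mg_ionization_energies, mg_ionization_energies[1:])]
jump_after = ratios.index(max(ratios)) + 1
print(f"big jump after electron {jump_after}")  # big jump after electron 2
```

That "little difference, little difference, huge jump" pattern is exactly what the homework data will show, just with more elements.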
So electron affinity is sort of the opposite of ionization energy. With ionization energy we had an atom and we took away an electron. With electron affinity, we're doing the opposite: we want to know how likely that atom is to take on an electron — for one atom by itself to bring on an electron. So for instance, something like chlorine taking on an electron to become chloride, Cl minus. So with ionization energy we were making cations — we were taking away electrons. With electron affinities we're adding electrons, so we're making anions. We're making negatively charged ions. Now with these, the trend sort of follows a lot of the other ones we've been looking at, where it increases as you go up the periodic table and increases as you go to the right on the periodic table. Now I wanted to just show you this — and I'll show you this in a few different ways for electron affinity — but this website is really great for looking at periodic trends. Especially if you're a visual person, you can go in and you can graph any of the trends that we've been talking about in the periodic table format. You can do it in a whole bunch of different ways. If you like numbers, it'll fill in the numbers for you. If you like seeing it in the sort of cityscape idea, it'll do that. You can do it as squares or circles, so that you can see it in a lot of different ways, and you can kind of see where the trends fall. I think it's helpful for a lot of people to see it that way. I picked two different ways of showing you electron affinity in it, but there are tons of different ways to display the data. Now let's talk about why these trends are the way they are. At this point, you want to still be thinking about all your other trends and how we explained those, because they're pretty similar explanations. As we go up the periodic table, the electron affinity is increasing — or, thinking about it the other way, as we go down the periodic table, we'd be getting bigger. Our electrons are already further away from our nucleus. Our outside electrons aren't able to feel the nucleus very well because of, one, distance — they're just further away — and two, shielding, the fact that those electrons on the inside are blocking the electrons on the outside from feeling the nucleus. So as we go down the periodic table, the electron affinity gets smaller, because a new electron is going to be further away and there's more shielding. As we go across the periodic table this way, now what is increasing? Our effective nuclear charge is increasing. As we go across the periodic table, our effective nuclear charge increases. Our electrons are able to feel our nucleus a little bit better, so a new electron will be able to feel the nucleus a little bit better too. So as we go across the periodic table, the electron affinity gets larger. Now you'll notice there are a lot of exceptions to this, just like there were with ionization energy. And if we had graphed ionization energy like this, it would actually look kind of similar — so go play around with the website and graph it and see. So you'll notice where your exceptions are here. Well, there's this exception here with this 2s2 — or excuse me, s2, depending on what n you're in. And then there's this other exception right here, which is your nitrogen group, where it's that same idea as when we talked about ionization energy. So there's that set of exceptions. We'll do these in more detail later, but this is just so you have a picture going into it. And now, same idea as every other trend.
This trend works well until you get to the halogens. So if you count over, you'll see that this is the seventh column, right? These are your halogens. So what's this hidden block here that you can't see? Well, that's all of your noble gases. And they're really low. They're really low for electron affinity — and they break the ionization energy trend too, just in the other direction, since they're very hard to ionize. And that's just because they're already really stable. They don't want another electron, and they don't want to lose an electron. And so they're going to be very low here, and they're going to break all of those rules. So the rules work really well except for when you have these exceptions because of electron configurations — right here, right here, and then, of course, your noble gases. OK. Now I want to show you some numbers, too, in case you like numbers, so that you can see that they are quite a bit different. So this shows you the exact same thing that I showed you here, except in number form, so that you can see it. You'll see that these were actually so low, they really were pretty much zero — and even negative, depending. And you'll see the same thing with nitrogen. So we have those same sorts of exceptions we talked about with ionization energy, and it's for the exact same reason: it's because of the electron configurations. Here you're already a little bit stable, so you're not going to want to add another electron and give up this electron configuration. With nitrogen, you're already at a half-filled shell. You don't really want to add another electron and become like an oxygen. And so because of that, you get these exceptions. And then you'll notice your noble gases all the way down. Your noble gases aren't going to take on another electron. They're not going to become an anion. OK. So let's go through and actually walk through some of these electron configurations so that we can see how this works. OK: why do group 1A and 7A form stable atomic anions? So your 1A is your sodium group, and your 7A is your halogens. So first of all, let's start with your halogens, mostly because those are the ones that we're used to looking at having an anion. To be honest, we're not super used to looking at the 1A ones as anions. OK. So we write down our electron configuration for fluorine. Now quickly write down your electron configuration for a fluorine minus. So of course, you'll add an electron here, and you'll end up with this. So now let's think about why this would be. Why is our 7A going to form this anion? Well, if you give it a negative 1 charge, you add an electron. Fluorine has a high electron affinity, right? It has a very high electron affinity. It wants to take on that electron. And up until now, we've sort of said, well, it wants to be like a noble gas. So that's true. This definitely gives it a noble gas configuration. And so that's one of the reasons behind why it has a high electron affinity. It's on that right-hand side of the periodic table, it has a high effective nuclear charge — thereby having a high electron affinity — and then it fills its shell so it becomes like a noble gas. So all of those reasons are good reasons for it to want to be a negative 1 ion. Now let's look at our next one, sodium. So if we write down our sodium, we have this. And if we write down our sodium minus, we get this. Now, we may not be used to looking at sodium as a minus, but it's possible. It's not quite normal — normally what do we do? Yeah, we just take away an electron and we make it a plus charge. But we could, in theory, do this.
It's not the worst thing in the world for it, and it gives it a full sub-shell. So it's not going to fill its shell like with fluorine, but it does give it a little bit of added stability. So if you put it under the exact right circumstances, it could do this. And it would be a little bit more stable than a sodium on its own. So of course, it's also more stable to just remove that electron and become a plus ion. But either one is more stable than just sodium. OK, now our noble gases. Why don't they form stable atomic anions? Why is their electron affinity so low? So let's pick one — I picked neon; you could pick any of them, though — and write out the electron configuration for it. So you write out the electron configuration, and everything's filled. Everything's stable. So it's not going to want to go ahead and take on another electron. It's not going to want to be a negative charge, because it doesn't add any stability to it. And so its electron affinity is basically zero. Now, let's look at one more. Let's look at nitrogen. Why would this one not want to form an anion? So why would we not have a nitrogen negative one? Well, if we did that, we would end up putting a 2p4 here. And that extra electron causes a little bit of repulsion. So there's this extra repulsion that comes from having to pair one electron. Now, if you pair all three of them and you get yourself to a noble gas configuration, well, OK, that has a level of stability to it. But just adding one lowers the stability of it. And so that's not going to happen. So that's why nitrogen, if we go back to this page, has basically zero. And that's also why, if we want to look at the one before it, we can see that this group breaks that trend. It breaks that trend and it goes lower, because of that half-filled shell. OK. Now we're going to move on to what is technically chapter two, but I want to cover it in chapter one, and you'll be tested on it in chapter one. And that's electronegativity. So this is very similar to electron affinity, and people get the two confused a lot. So be careful that you realize the difference. This is the ability of an atom to attract electron density in a chemical bond. So it's not attracting a whole electron and bringing it to itself — that's electron affinity. This has to be in a bond. Now, the nice thing about it being in a bond is that it takes away most of the exceptions. So make sure that you can identify the differences between these. And this is why this is technically in chapter two: technically we haven't learned about bonding yet. But you wouldn't be in this class if you didn't know what bonding was, and we talked about it in the fundamentals. So this is going to follow the exact same trend as electron affinity and ionization energy, and it's going to do so for the exact same reason. So you don't really have to memorize a separate reason for every single one of these — they all have the same reasoning. As you go across the periodic table this way, what's increasing? Your effective nuclear charge is increasing. And if your effective nuclear charge is increasing, it's holding onto those electrons a little bit more. And since it's holding onto the electrons a little bit stronger, it's going to be a little bit less likely to lose an electron, it's going to be a little bit more likely to attract an electron — or in this case, it's going to be a little bit more likely to pull the electron density toward itself if it's in a chemical bond. Now, as we go down the periodic table, it gets smaller.
And that's for the exact same reasoning too. As you go down the periodic table, you're adding electron shells, and you're making those outside electrons further away from the nucleus, which means that the outside electrons feel less of an effective nuclear charge, because of distance and because of shielding. And so as you go down the periodic table, your ionization energy decreases because it's easier to pull off an electron, your electron affinity decreases because it's less likely to attract an electron, and your electronegativity decreases — if it's already less likely to hang onto its own electrons, it's a lot less likely to attract electrons in a bond. So all of the same reasonings behind the other ones are true for this as well. There are some exceptions to this, and those exceptions are in the d block and fall into the realm of more intense inorganic chemistry that you can learn if you take inorganic chemistry. So we're not really going to mess around with exceptions here. So let's look at this so that you can get sort of a visual for it. You can see that, yeah, this looks a lot different than our electron affinity and our ionization energy pictures, right? As you go across, it's almost a pure gradient. And sure, you can see some exceptions here, and maybe after our discussion about electron configurations, you may actually be able to figure out some of them. But we're really not going to worry about it too much. Just note that as you go this way, it increases. Now let's talk about why there are no exceptions — because you may not need to know the exceptions for this, but you should know why there aren't exceptions in electronegativity when there are in electron affinity and ionization energy. So what's the main difference between the two? Well, let's first just talk about the differences between the two, and maybe at that point you'll know why. So they have the same trends, right? You just looked at the trends for both — they're the same trends. They're both talking about the ability of an atom to attract electrons. Now, your main difference comes in the fact that electronegativity is the ability to do it in a bond. So it's attracting electron density. It's not attracting a whole electron; it's still sharing the electrons, it's just attracting more of the density. It's unequal sharing. Electron affinity is the ability to do it as an isolated atom. So for an atom on its own, not bonded to another atom, what is the likelihood of it attracting a whole electron on its own and becoming an anion? As opposed to electronegativity, where you're talking about its ability to attract electron density in a shared bond. So an example of this would be something like CO2 — something where oxygen, if you find it on your periodic table, is much more electronegative than carbon. So if you were to look at where all the electron density is sitting, most of it's being stolen by the oxygens. The oxygen is taking most of the electron density and not sharing it equally with the carbon. With something like electron affinity, you're going to be talking more like this, where you have a chlorine atom on its own, not bonded to another atom, taking an electron and becoming a Cl minus. Okay, so now with all of that in mind, why would you have exceptions for one and not the other? Well, if you're talking about electron affinity — and for that matter ionization energy too — why is it that we had the exceptions? What is it that made those exceptions happen? How did we explain them?
We did that with electron configurations, right? We said, well, the electron configurations say that we're at a fully filled sub-shell or we're at a half-filled sub-shell. So why wouldn't that matter for something in a bond? They already have an octet, right? Everything in a bond — you formed that bond for a reason. With CO2, we formed the CO2 bonds so that this would have an octet. It's sharing with the oxygen to form an octet, and the oxygens are sharing with the carbon to form an octet. So there are no exceptions here, because the electron configuration is already stable. We've given everything its octet. We've made everything stable. So the only thing that plays a role in how much electron density an atom is going to pull toward itself and steal from the other is its effective nuclear charge — and of course, as you go down the periodic table, how much shielding there is and all of that which affects the outside shell's effective nuclear charge. So that's why electronegativity doesn't have as many exceptions as electron affinity. Okay. So now I want to go through and explain something that we talked about in the fundamentals section. When we talked about it in the fundamentals section, I said, here's a little trick to help you memorize, and we'll explain it later. Now it's later, so let's explain it. The inert pair effect. So, I should mention, we're moving on from periodic trends. That was the last of the periodic trends that we're going to talk about, so we're sort of done with that. And now we're going to move on to a couple of different things that we need to talk about that come up with the periodic table. So it's still sort of in the realm of it, but just not quite as general. So, the inert pair effect. I pulled this up during our fundamentals talk to say, hey, this will help you memorize how to do these ions, because you can look and you can say, well, look, these are off by two. These are off by two, off by two. And so it gave you some hints for memorization. Now let's go through and explain why these are how they are. So we have this little miniature section of the periodic table there. So to do this, let's write out the electron configurations for everything. First we're going to do two examples: antimony and lead. And then I'll leave the other four for you to do at home for practice. So we'll start with Sb. So we start here. So you should have your big periodic table in front of you so you can see where this little section is. And let's write out the electron configuration for just the neutral atom. So if we do that, we're left with the noble gas core, 5s2, 4d10, 5p3. So if we go to remove our electrons, where are we going to remove them from? Well, our first step is to remove our p electrons, right? And this sometimes gets confused in the midst of the different rules for where you remove from when. When you're in the s block, you remove from the s block first. When you're in the p block, you remove from the p block first. That switch comes when you're in the d block; then and only then do you switch around and remove from the s block first. So we are well within the p block of the periodic table. So we're going to remove the p electrons first. And it becomes the noble gas core, 5s2, 4d10. Okay, so what charge does that give us? A 3 plus charge, right? We just took away the p electrons — taking away three electrons, and electrons are negative. So if we take three of them away, we're left with a cation.
So we take the cation and we write it as Sb3+. So now we're basically in the D block, right? We're basically sitting here at the cadmium electron configuration. So now where do we remove from? Now is where we remove from the S block. So when we remove from the S block, we remove both of these electrons. And we get the noble gas core plus 4d10. Okay, so these are our valence electron configurations. So you'll see, the reason that these two charges differ by two is because at this point, you remove from your S block. So you removed two of them. Now, you wouldn't remove just one of them, right? Because then you're at that sort of in-between; you're not stable, so you wouldn't want to just remove one. So you're either going to remove all of your P's and none of your S's, or all of your P's and all of your S's. Okay, so that's Sb. Now let's move on to lead. So if we go through and we write out our electron configuration for just neutral lead, we get this. When we need to remove, we're sitting in our P block, so we remove from our P block first. And that gives us what charge? Two electrons — we take those two electrons away, and we have a two plus charge. So at that point, now we're sitting in our D block; we're sitting at like mercury. So can we remove more? Well, sure, we can remove these two S electrons without too much difficulty, which leaves us with four plus. And it gives us this electron configuration. So all of these are going to end up with a difference of two. Because one set of the electron configurations, the lower one, is going to be if you just remove the P electrons. The second set comes into play when you remove the S electrons, which there's two of, so it's always going to be off by two. And so that gives you these gaps. So up until now, we were just using this as a help to memorize. Now I could ask you to explain it, and you could say, well, here's the electron configuration for one of them, and here's the electron configuration for another one of them. So you can see that we're always going to be removing these P electrons first, and then the S electrons. So there's always going to be a difference of two. So then why the different charges between this group and this group? Well, they have a different number of P electrons, right? This only has one P electron. So you're going to get a plus one, and then when you remove the S electrons, you're going to get a plus three. How many P electrons do these have? Two. So you remove the two P electrons, and then you remove the two S electrons. These have three of them, so you're going to take away the three P electrons first, and then the two S electrons. So go through for the other four and write out all the electron configurations. It's good practice writing out electron configurations, and then you can explain this.
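Since the bookkeeping here is completely mechanical, it can help to see it as a few lines of code. This is just a minimal Python sketch under the lecture's own rule — remove all the outer p electrons, or the p electrons plus both outer s electrons. The function name is made up for illustration, and the six elements listed are my guess at the mini-table (In, Sn, Sb, Tl, Pb, Bi).

    def inert_pair_charges(n_p_electrons):
        # Rule from above: a p-block metal loses either just its outer p
        # electrons, or its p electrons plus both outer s electrons.
        return n_p_electrons, n_p_electrons + 2

    # The six elements from the mini periodic table section:
    for symbol, n_p in [("In", 1), ("Sn", 2), ("Sb", 3), ("Tl", 1), ("Pb", 2), ("Bi", 3)]:
        low, high = inert_pair_charges(n_p)
        print(f"{symbol}: +{low} and +{high}")
    # Sb comes out +3/+5 and Pb comes out +2/+4, and every pair differs by two.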
Okay, so now we have one more of these sort of periodic groupings to talk about. This is called the diagonal relationship. This is something that sort of comes out of all of the other trends. So let's look at this part of the periodic table real quick. You have this sort of diagonal trend going this way. Now, you may think back to questions where people said, well, can you ask me this? And maybe the question was, which one has a higher electron affinity, phosphorus or selenium? Because in one case the trend goes this way, and in one case the trend goes up. So it was these diagonals against the trends that I said I can't ask you. I can't ask you which one's bigger, phosphorus or selenium. Because going down the periodic table, they get bigger. So that would say that selenium is bigger. But going across the periodic table this way, well, they get smaller, so selenium would be smaller. So according to one trend, it's bigger, and according to one trend, it's smaller. So which one is it? And I said, I don't know, I'd have to look it up. And I also had said that they'd be close, right? Because those two trends are kind of battling with each other. One's making it bigger than the other one; one's making it smaller than the other one. And so they're actually very similar. And that works for the other trends too — that works for ionization energy and electron affinity, all of the trends that have that same direction. So as we go down the periodic table this way, each time we're saying, okay, well, this one increases as this one decreases. What happens there is that they happen to be very similar. They have similar ionization energies, similar electron affinities, similar sizes, and so they end up with similar properties. So here's that in sort of pictures, figures from your book. That's what the figures from your book are trying to get at. If we look here at our sizes, as we go this way, there's an obvious trend, right? And that's those trends that we've been talking about. Fluorine is our smallest; fluorine is our most electronegative. And then as we come down here, cesium would be our largest. It would be our least electronegative. When you go this direction though, the diagonal this way, now it's harder to see. You look at this and this and this, and they're pretty close to each other. This and this and this, and they're pretty close to each other. Same thing as you go this way on the periodic table. That makes it so that you end up with really similar properties. So that's called the diagonal relationship. Now, that's obviously very different from the fact that we have similar properties as we go straight down a group, right? That's based on electron configurations. That's based solely on, okay, you have this many electrons in each of your shells. This just comes out of the fact that, hey, these have similar numbers for all of these things that we've been talking about in this last quarter of the chapter. And so this is what the diagonal relationship is. Okay. So that pretty much wraps up chapter one. That kind of ends the first exam material and all of the sort of relationships that go along with it. So at this point, we'll end for the day. And then next time we'll start up with chapter two, and we'll start talking about how to draw Lewis structures and how bonding works and things of that sort.
Chem 1A is the first quarter of General Chemistry and covers the following topics: atomic structure; general properties of the elements; covalent, ionic, and metallic bonding; intermolecular forces; mass relationships. Index of Topics: 0:00:29 Examples of Ionic Radius 0:06:11 First Ionization Energy 0:18:53 Second and Third Ionization Energy 0:21:13 Electron Affinity 0:31:08 Electronegativity 0:38:07 Inert Pair Effects 0:43:16 Diagonal Relationships
10.5446/18971 (DOI)
So today we're going to be spending most of our time on quantum numbers, and along with that, electron configurations and energy level diagrams. The reason we're going to start with quantum numbers is because this sort of explains how we're going to build up our electron configurations. So there's a good analogy that goes along with this. It's not a perfect analogy, and my fellow P-chemists may not love it a lot, but it works really well as a starting point to give you a feel for how this works. So the way quantum numbers work is that you're describing the distribution of electrons — you're describing where you can find electrons. The combination of these specifies what the wave function is. It specifies which one of these you're going to be using. Now, you can think of this as sort of an address for an electron. It's not a perfect analogy, but it kind of works. So for instance, let's say we're trying to find a person on campus. First I tell you, go to building number 403. So that gets us to this building. Then I say it's in room 1100. So now I've narrowed it down and I've said, okay, we're into this room. So first I got us to the building, then I got us to the room number. Now I say row F. So now we're at five rows back. So we've narrowed it down to like 20 of you. Now I say chair one. So now I'm talking about that person right there. Now I say, are you male or female? That gets me to even more description. So I've narrowed it down all the way, starting from anybody on campus, which is quite a lot of people, down to a room, so about 400 of you, to a row, 10 of you, a chair, one of you, and an added descriptor. So this is kind of how you can think of quantum numbers. And all of these numbers are going to get you to the wave function, at which point we will kind of stop there and do a lot with it. But that is what gets you there. So now how does this work with actual atoms? So the first quantum number, the principal quantum number, we've already talked about. We've been using this since the beginning of the quarter. This is N. We've not called it a quantum number, but it is one. Now, the next one we've just sort of recently introduced and not talked about a lot. So remember back to when I said we have these orbitals — we have S's and P's and D's — and I said, okay, well, I named them S, P, and D. Well, these also have numbers associated with them. The S's are zero, the P's are one, the D's are two. And so that's what we have right here. Now notice, for the N equals one level, you're only allowed to have L equals zero. And for the N equals two level, you're only allowed to have L equals zero and L equals one. Three: L is allowed to equal zero, one, and two. So that's L. Now the next one that we're going to have is M sub L. What M sub L does is it gets us further. Right here, if I just say N equals two and L is equal to one, so we're in the P orbitals, I still have three of these. I'm still looking at three different P orbitals. So how do we decide which one it is? Well, that's what our M sub L does. It gets us to which axis we're on, which orbital we're on. And then the last one that we're going to have is M sub S. And that's going to be just one of two numbers. We'll talk about that one in a minute. So let's go through each one of these in detail first. So, first of all, the principal quantum number.
This is the one that we already know about, that we've already been talking about. This one's N. This gets us to the energy shell. It's going to be an integer — any positive integer: one, two, three, and so on. Zero isn't a positive integer, so it's not allowed in this case. We saw this in the Bohr model as the energy levels, so this one we already have sort of a feel for. We talked about energy levels up to like five; technically they can be as high as we want. Now comes the one that we haven't really spent a lot of time on, the angular momentum quantum number. So this is the L that I've been talking about. And make sure you write this as a script L so that you know it's an L. Once again, the computer font gets us. This distinguishes the shape. That's what I was talking about when I pointed to the picture that we had, and I said it distinguishes whether it's an S orbital, a P orbital, or a D orbital. These are going to have both values and letters. So if we have an S orbital, we have a zero. If we have a P orbital, we have a one. If we have a D orbital, we have a two. So these are associated with those orbitals we just talked about. Now there's rules for these. For N we had a rule too; it was just kind of simple, so we didn't spend a lot of time focusing on it. It was any positive integer, anything from one on up. Now for L, L is dependent on N. So if we say that N is equal to one, L is allowed to go up to N minus one, so L is only allowed to equal zero. If we say that N is equal to four, L is allowed to equal zero, one, two, and three. If we say N is equal to six, L is allowed to equal zero, one, two, three, four, and five. So any time that you have an N, L is allowed to go from zero all the way up to N minus one. So we're going to kind of start building a table here, and we're going to just add to it with each one we go through. So for each N, we have these L's associated with them. Now remember that since these L's have orbitals associated with them — letters associated with them — you could also say N equals one is only allowed to have an S orbital. N equals two is only allowed to have S and P orbitals. N equals three: S, P, and D orbitals. N equals four: S, P, D, and F orbitals. So for the fourth energy shell — if this is a way I could ask a question — we could say, well, what orbitals are present? Well, S is L equals zero, P is L equals one, D is L equals two, and F is L equals three. So for the fourth energy shell, you're allowed to have S, P, D, and F orbitals. Now we're going to move on to the magnetic quantum number. The way we say this is M sub L, and when we get to S, we'll say M sub S. That's how you say it. Now this distinguishes the orientation. With N and L, we've already said which energy level we're in and which sub-shell we're in, and now this is going to pick between the different orientations. So when we talked about this in terms of P — you know, one P is in this direction, one P is in this direction, and one P is in this direction — the M sub L designates which one we're talking about. The allowed values of these go from negative L up to L. So our L is dependent on our N, and then our M sub L is dependent on our L. So let's go through and look at this. For N equals 4, we're allowed to have L equals 0, 1, 2, and 3. And then M sub L is allowed to equal all the way from negative 3 up to 3.
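These counting rules are simple enough to write down as code, which is one way to check yourself. Here's a minimal Python sketch (the function names are just for illustration). Notice that allowed_ml takes only L, not N — that's exactly the caveat coming up next — and is_allowed will settle the allowed-or-not examples we do in a minute.

    def allowed_l(n):
        # L runs from 0 up to n - 1
        return list(range(n))

    def allowed_ml(l):
        # m_l runs from -l up to +l, and depends only on l
        return list(range(-l, l + 1))

    def is_allowed(n, l, ml, ms):
        # a set of four quantum numbers is valid only if every rule holds
        return n >= 1 and l in allowed_l(n) and ml in allowed_ml(l) and ms in (0.5, -0.5)

    print(allowed_l(4))                 # [0, 1, 2, 3] -> s, p, d, f
    print(allowed_ml(1))                # [-1, 0, 1]   -> the three p orbitals
    print(is_allowed(1, 1, 0, 0.5))     # False: n = 1 only allows l = 0
    print(is_allowed(3, 2, -2, -0.5))   # True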
Now keep in mind, though, that this table may be a little bit misleading, in the fact that this is only true if L equals 3, if we're in the F orbitals. If I tell you that we're in the N equals 4 level but that L is equal to 1, then our M sub L would only be allowed to equal negative 1, 0, and 1. Because M sub L is always dependent on what L is. And so if I tell you that L equals 2, it doesn't matter which energy level we're in; all that matters is that I said L equals 2, which means that M sub L is allowed to equal negative 2, negative 1, 0, 1, and 2. So keep that in mind: if I say N equals 4, what is M sub L allowed to equal, then you can go ahead and list all these out. But if I define L as one of these lower numbers, you define M sub L based on that. Okay, quantum numbers kind of laid out for you. So this is that same picture that I showed you, only now we're going to go ahead and put in all our levels. So I said that we're in the N equals 3 level, just to sort of give us something to start with so that we would have all of these orbitals. So let's go through and fill all this in. So now we have S, P, and D orbitals. We've defined this as S, these as P's, and this as D. I've given you the M sub L values. You don't have to worry about which one's which; that doesn't matter at all. And we have our L values. We have our M sub L values. So this would be how you could relate it to those pictures. Now, I wouldn't ask you to go through and draw this here. I'll show you how I'd ask these questions in a minute. But this is just to help you picture it. So now you can go back and look at those Hamiltonians again if you really want and spend some time seeing how those work. Okay. Now there's one more quantum number we haven't talked about, and that is the M sub S. So you're going to say this M sub S — that's what I mean here by how it's read. Well, this one describes the spin. You can think of electrons as having a spin, and a lot of times we'll call these spin up or spin down. You're allowed two values, and those are plus one half or minus one half. You're not allowed any other value except for that. So now, what comes out of that is — think about it and tell me — how many electrons are allowed in each individual orbital? Our principal quantum number N got us to which energy shell we're in. Our L got us to which sub-shell we're in. Our M sub L got us to which orbital we're in. And now, within one orbital, we're only allowed to have values of plus one half or minus one half, and you're never allowed to have all of your quantum numbers be the same. So this means that you're only allowed two electrons in each orbital. Because one can be spin up, one can be spin down. They can't both be spin up. They can't both be spin down. And you can't cram like four of them in there and have half of them spin up, half of them spin down. You can only have one of each. And within any given atom, there can only be one set of quantum numbers per electron. Okay, now let's go through — I'm going to give you a set of quantum numbers, and you're going to tell me whether it's allowed or not and why. So the first one; I'm going to give you a second to look at it. So if N equals one, that's allowed. That's not a problem. Now we have L equals one. Well, while that's not disallowed in general, if N is equal to one, the only thing that L is allowed to equal is zero.
And so we can't have L equal to one. So this is going to be wrong. All the rest of them would be okay otherwise. If this had instead said N equals two, we'd be fine. So we're not allowed to have that. Let's look at the next one. So we have N equal to three; that's not a problem. We have L equal to one. That's fine, because we're allowed to have L equal to zero, one, or two in this case, since N is equal to three. And then M sub L equals negative two. So is that one going to be allowed? No — that one's not going to be allowed at all, right? Because if L is equal to one, M sub L is only allowed to equal negative one, zero, and one. You might be saying, but wait, N is equal to three, so it's okay. Well, sure, if N is equal to three and L were equal to two, then you would be allowed to have this. But I didn't tell you that L is equal to two. I said that we're in the P orbitals. We're not in the D orbitals. So even though this energy shell has the D orbitals — it has L equals two — we're not talking about that. We're talking about the P orbitals. And so this isn't allowed. Okay, next one. N equals two; we're okay there. L is equal to one, and we're okay there. M sub L is equal to zero; that's fine. Now I have M sub S equal to one. That's obviously not allowed, right? M sub S is only allowed to equal plus one half or minus one half, never anything else. So that one's disallowed right away because of the M sub S value. So, one more to do. We have N equal to three, so we're okay there. L is equal to two, which means we're in the D sub-shell; so we're okay there, that's allowed. M sub L is equal to minus two; that's fine. And M sub S is equal to minus one half, and that's fine too. So we're set on that one. Okay. So now that we've set up our quantum numbers — we've set up the way that we can describe which sub-shells and energy shells we're in and how many electrons we have — we've set everything up so that we can figure out exactly where the electrons are. Now we just need to actually lay this out and figure out our ordering and all of that. To do that, we have to talk a little bit about where these are in regards to their energy levels, and how close they are to the nucleus. So the way that we go about doing this, the terminology that we use for this, comes down to two main terms. Shielding is going to come up a lot in this section. Penetration comes up just for the sake of describing how close the different orbitals are to the nucleus. So this is sort of the technical definition for it, and what it says is: orbitals with radial probability closer to the nucleus are more penetrating. So let's break that down a little bit. Orbitals with radial probability — what is radial probability? Well, that says at what distance away are the electrons most likely to be? Are the electrons most likely to be here, at a small radius, or here, at a larger radius? So all this means is that orbitals whose electrons are most likely to be found closer to the nucleus are more penetrating. Not so bad when it's broken down like that; the terminology is just that. Now, the closer to the nucleus the orbitals are, the more they're going to shield the outside electrons. So there's a way of thinking about this that kind of works. Say we're dealing with magnets, right? So let's say we take a nucleus — or we take a positive magnet of some sort — and we put it down and we put a bunch of paperclips around it.
And those paperclips are going to stick to the magnet, right? Now we put another layer of paperclips around. And those are probably going to stick too, because magnets are strong and it can go through the paperclips. And then we put another layer of paperclips. Okay, that layer of paperclips might stick. Put another layer of paperclips. Well, why are the outside paperclips eventually not going to stick anymore? It's because those inside ones are blocking it, right? It's keeping the outside ones from feeling the magnet. This is no different. This is the way that shielding works. The 1s electrons are blocking the 3s electrons from feeling the nucleus. If your nucleus is like right here, and pretty much all of your electron density from your 1s is here, 2s is here — well, those are in the way of the 3s electrons. The 3s electrons aren't going to feel the nucleus as much. And that's called shielding. So these are shielding the 3s electrons. The 3s electrons in this picture are the most shielded. They are shielded from the nucleus by these electrons. Now, that's how you use that term. So in this sort of picture we could say that 1s is the most penetrating, then 2s, and then 3s. And we can say that 3s is the most shielded, and that 1s is shielding 2s, 1s is shielding 3s. So that's sort of the convoluted terminology here that we want to get through. Okay. So with all of that terminology of quantum numbers and descriptions as a backdrop, now we can actually draw some energy level diagrams. Okay. So we have two of these drawn out. We have one for hydrogen — and I suppose I should say hydrogen-like ions as well, so helium one plus, lithium two plus, and so on; these are with one electron. And here I have the energies of everything else. So we have these all drawn out for you. And there's going to be an easier way to do this, rather than drawing them out every single time, using the periodic table. But for now we just want to be able to see it all written out. So we have our 1s here, and then our energy level 2, our energy level 3, 4, and 5. So notice these are all at the same energy levels. These are not. So when I ask what's the difference between the two, that's sort of the big one, right? These are all in the same energy level; they're all the same energy all the way across. We call that degenerate. If something has the same energy across an entire energy shell, those are degenerate. So there's a few different ways we can use this terminology as well. We can say here that the 2s and 2p levels are degenerate. They have the same energy. Over here there's degenerate orbitals too; it's just not within an energy level. So 2s and 2p — those aren't degenerate anymore, right? They're not at the same energy level. The 2p's are still degenerate with themselves. Okay. So this is going to be how we can represent where our electrons are. Right now I don't have any electrons drawn in. We'll do that in a minute. But this is kind of how we're going to start. This is what we call an energy level diagram. Now let's look at this last question I have for you. Here I have drawn hydrogen with atomic orbitals higher than 1. Is that correct? So how many electrons does hydrogen have? It only has one electron. Now, if we put one electron in here, that means that our 1s orbital is filled. So a common mistake on the exam is, I'll ask you to draw atoms up to a certain level, and everyone stops with the hydrogen at 1s, and they say, well, it doesn't have any more orbitals than that.
Now we know that it has to have more orbitals than that, because we know it has more energy shells, right? What have we been doing up until this last week or so in class? Think about what we were doing with the Rydberg equation. We were taking a hydrogen atom, taking an electron from the energy 1 shell, promoting it up into the energy 4 or 5 shell, and then bringing it back down again. So we know that it has to have higher energy levels. Remember, we didn't really talk about S and P and D orbitals then, but we know it has higher energy levels, which means it has all of these. They're just not filled in general. The only time they're filled is if we excite the atom. But that doesn't mean that they don't exist. So yes, this is still fine. Just because they're not filled doesn't mean they don't exist. Okay. Now we're going to draw some of these and look at some of these. So let's look at a wrong version and a right version. Here we have an electron diagram for carbon. If I go to draw it like this and I fill these two, that's wrong. Now, this kind of goes into some rules here. So we have the Pauli exclusion principle and Hund's rule. What the Pauli exclusion principle says is that no two electrons can have all their quantum numbers the same. We've already kind of alluded to this, because we had to. We alluded to that when we said that there's only two electrons per orbital, because one has to be spin up and one has to be spin down — one has to be M sub S plus one half, one has to be M sub S minus one half. And the Pauli exclusion principle is the thing that tells us that. It says that if N is the same, that's okay. If L is the same, that's okay. If M sub L is the same, that's okay. And if M sub S is the same, that's okay. But they can't all four be the same. Every electron has to have at least one of those numbers be a little bit different. So what that effectively means is that in each orbital, you can have a spin up and a spin down. Not two spin ups, not two spin downs, and you're okay. And of course you could have a spin up here and a spin up here; that's not a problem, because their L number is different. You could have a spin up here and a spin up here, and that's fine — even though their L number is the same, they're both S, because their N's are different. So one of those quantum numbers has to be different. Filling in the electrons in order of energy levels is important. You're going to start at the low energy level and work your way up, which we did here. We still haven't really broken any rules yet on this one, and yet this one's not going to be right. Each orbital holds two electrons. That comes from the Pauli exclusion principle; that's what we just talked about. Now comes Hund's rule. You fill across degenerate energy levels before you double up in an orbital. And that's why this one's wrong. You have to go across the energy level first, and then you can come back and double up. So that one's not right because of Hund's rule. This would be how you have to do it. You'd have to go one right here and one right here. And if we were talking about nitrogen, which has one extra electron, we could go another one right here. Only then, once you've filled across the P orbitals, do you come back and start pairing them up. Okay. Now let's write an electron configuration for cobalt. So we find where cobalt is in the periodic table.
So, starting probably about 10 minutes ago and up until the first midterm, you're really going to want to just bring a periodic table with you and have it out at all times while we're going through this. So find cobalt on your periodic table, figure out where you are, and figure out how many electrons we have. Okay. So we fill in all the electrons, filling across the row first. And we just keep filling until they're all gone. And at that point, we have the right number of electrons for cobalt. Now we can write the electron configuration for it. And this is what we do for the electron configuration: we basically just take each one of these orbitals and write it with how many electrons are in it. So this has two, so it's 1s2. This has two, so it's 2s2. P — and there's six of them — so 2p6, and so on and so forth, all the way up. Now you can imagine, if you look down at the bottom of your periodic table, at like rubidium or something, that this is going to get a little tedious to write. And so we have a shorthand notation for it. And that's to say, well, we know what the nearest noble gas below it looks like. The only thing that really changes is the outside electrons. So we're just going to put the nearest noble gas here, and then we're going to fill in the valence electrons right here, since this is the only part that's really changing anyway. So this gets us up to the noble gas. So on your periodic table that you have sitting in front of you, find argon, and then count your way up until cobalt. And this gets you there. Okay. Now, I've been talking about this looking-at-the-periodic-table thing. We've been using that to count electrons. But it's going to get a little tedious to try to draw out the full diagram every time you need to write an electron configuration. If I give you three, four electron configurations on an exam, you don't want to have to draw all these out. So there's a much easier way to do it, and that's to look at these in relation to the periodic table. So this is a periodic table with the F orbitals where they belong. And you can count your way through to any electron configuration you need using the periodic table and just sort of pointing through. So for instance, let's look at the example that we just did. So we have cobalt right here. And if we go back and we start at argon — because it's easier to start at your last noble gas — you can start at argon. And then you come over here and you say, okay, well, I'm in the four energy level: 4s, one, two, so 4s2. Now the thing to be careful of is your D's. They start at three. So 3d: 1, 2, 3, 4, 5, 6, 7 — 3d7. And you're done, without having to write out that big table and things. So let's look at carbon. We've kind of done the electron configuration for carbon too, drawing it into those charts that I gave you. Now we can do it with the periodic table. So let's go ahead and start from scratch; let's not start from the last noble gas, even though we could. So we'll start with 1s. And we'll say 1s: one, two. So 1s2. 2s2. 2p: one, two. So 2p2. If we were looking at phosphorus, now let's start from the last noble gas. So we'll start from neon. We'll say we have neon here, and then we come down here, and it's 3s2, 3p3. And now we have phosphorus. So you can use the periodic table to just sort of point to an element, go back to the last noble gas, and then work your way through until you get to the element that you need. And that'll save you a lot of time, so you don't have to draw out the energy level diagrams unless I ask you to.
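That point-and-count procedure is really just a loop over the sub-shells in filling order, so here's a minimal Python sketch of it for checking your answers. The order list is the standard aufbau order, and note this deliberately knows nothing about the half-filled and fully-filled D-block exceptions, like chromium and copper, that come up a little later.

    SUBSHELL_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s",
                      "4d", "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
    CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

    def electron_configuration(n_electrons):
        # Fill sub-shells in order until the electrons run out.
        config, remaining = [], n_electrons
        for sub in SUBSHELL_ORDER:
            if remaining == 0:
                break
            take = min(remaining, CAPACITY[sub[-1]])  # last character is the letter
            config.append(f"{sub}{take}")
            remaining -= take
        return " ".join(config)

    print(electron_configuration(27))  # cobalt: 1s2 2s2 2p6 3s2 3p6 4s2 3d7
    print(electron_configuration(6))   # carbon: 1s2 2s2 2p2
    print(electron_configuration(15))  # phosphorus: 1s2 2s2 2p6 3s2 3p3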
OK, so a few more definitions for you. Diamagnetism versus paramagnetism, in what I kind of believe is one of the worst-named systems of all time. So paramagnetic atoms, or ions, or molecules, whatever, have unpaired electrons. Any time you have unpaired electrons, that's going to be considered paramagnetic. Now, the interesting physical aspect of paramagnetic species is that they're drawn toward magnets. Now diamagnetism — those have all paired electrons. So if you have a single unpaired electron, it's automatically paramagnetic. And it doesn't matter how many more unpaired electrons you have; if you have at least one, it's paramagnetic. To be diamagnetic, every one of your electrons must be paired. And the diamagnetic molecules are going to be just a tiny, tiny bit repelled by magnets, but not very much. Really, the main physical interpretation here is that the paramagnetic ones will be drawn toward magnets. So for carbon, if you draw this out, you get something paramagnetic; for neon, you get diamagnetic. Now, really the easiest way to see this is to actually go ahead and draw out the energy level diagrams. You can do it without drawing them out, but sometimes it's a little bit hard. Mostly because, if you think about what the electron configuration for carbon looks like, if we just wrote this out, it's 1s2, 2s2, 2p2. If you just see 2p2, there's this really strong temptation to be like, oh, there's two of them, okay, they're paired, and move on. But that's not right, right? You need to actually see that they go one, two across, because we fill across the degenerate orbitals first — and most of you know that. You would know, if I asked you to draw this, to draw one here and one here. It's just that there's always that temptation, when you see an even number, to automatically say that they're paired even when they aren't. So it's a good idea to draw these out. Okay. So now that we have that definition taken care of, let's keep going with our electron configurations. We've done ones for atoms so far. We drew out a few of them, we drew out a few of the energy level diagrams, and we looked at the periodic table and saw how we could just do it with the periodic table. Now let's look at what we're going to do for ions. So for ions, you're just going to add or subtract, depending of course on what kind of ion it is. If it's a cation, you have a positive charge, so you're going to do what — take away electrons or add electrons? It's positive, so that means you're taking away electrons. If you were to have an anion, a negative, then you're going to need to add electrons. Okay. So let's start with calcium. If we draw out the energy level diagram of calcium, this is what we get, which gives us an electron configuration of this. So make sure that you can kind of do that on your own too. Now, if we want to turn this calcium from a neutral calcium, which we have here, into a calcium 2 plus, what do we need to do? We need to take away two electrons. So you're going to want to take them away in the opposite order that you added them, right? You always add low to high; you take away from the high ones on your way down. You're always going to take away from the outside shell — which makes sense: if you're going to try to steal an electron and you have all these electrons around, are you going to pick one off from the outside or one from the middle? You're obviously going to pick one from the outside. So you take from this 4s.
So you take away the 4s electrons, and you're left with this, which gives you this electron configuration. Okay. So that's how you do cations, the positively charged ones. Let's do an anion now. So here we have neutral chlorine, and I've drawn out the energy level diagram. Now, something that I kind of cut out of these energy level diagrams because I ran out of room is the axis line that says E here. That is important. This is an energy level diagram. It's kind of like a graph; you always want to have an axis. So make sure on any sort of exam or anything of that sort, you draw that like I did in those first couple of slides, where I drew everything out really nicely. Okay. So we have this electron configuration. You could just get that off the periodic table; you wouldn't have to draw this all out. And now we need to make a chlorine 1 minus. So it's a minus 1 charge. So do we need to add or subtract an electron? We need to add one. So we go ahead and add one in here to give us this, which then changes our electron configuration to end in 3p6. So those are sort of the normal ones. Now, things start getting a little bit weird when we get into the D block, so we're going to spend some time talking about that. Half-filled and fully-filled sub-shells are going to be more stable. Now, when you're in your P block, there's not a lot that we can really do about this. It will affect some of our periodic trends that we'll talk about a few days from now, but it's not going to affect our electron configurations. However, when we're in the D block, it is going to. And if you look at this — I mean, granted, this isn't to scale, obviously, but I did try to make it a little bit obvious — your 3d and your 4s energies, and likewise your 5s and your 4d, are very, very close to each other. The 4s and 3d levels are so close to each other that there can be a little bit of exchanging going on. You don't necessarily have to follow all the rules, and sometimes it's more energetically favorable to break them. And this is where the half-filled and fully-filled exceptions are going to come in. So when your valence D orbitals have electrons and you need to make a positive ion, don't take from your D orbitals. You're always going to take from your S orbitals. So this actually isn't really an exception; it's really more just a rule. Whenever you're making a cation and you're in that D block of the periodic table, take from your S orbitals first. So always remove from your S orbitals, and we'll do an example of that in a minute. Now, F is going to have a ton of exceptions — the F block. If you're bored and you want to look at them, the book has a list of all of them, and Wikipedia lists them all out for you too. Don't worry about those. If I ask you about the F block, I'll pick one that follows the rules. So I kind of already spoiled this question for you: why can we be so fluid with our exchange of electrons between the S and D orbitals? Once again, it's because they're so close. They're so close in their energy levels that even though D is higher, you're going to take from the S block. And in just a minute, we're going to see that there are some where we can promote S electrons up to the D orbitals in order to get half-filled and fully-filled sub-shells. So we're going to talk about examples of this and this. These are your two big, big things that we have to worry about, and for now you only have to worry about them when you're in the D block. Your P orbitals — when you're in your P block and you're talking about atoms that are in your P block — don't worry about it. Just worry about your D block.
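Since the remove-from-S-first rule is also mechanical, here's a minimal Python sketch of it. The make_cation name and the list-of-pairs representation are just for illustration, and the ranking by (n, sub-shell letter) is simply one way to encode both removal rules at once — p empties before s within a shell, and 4s empties before 3d. It deliberately knows nothing about the chromium and copper promotion exceptions, and it reproduces the cobalt example coming up next.

    def make_cation(config, charge):
        # config is a list of (subshell, count) pairs in fill order,
        # e.g. the output of the earlier electron_configuration() sketch.
        config = [list(pair) for pair in config]
        for _ in range(charge):
            occupied = [p for p in config if p[1] > 0]
            # highest n wins; for a tie in n, p outranks s (and d outranks p)
            target = max(occupied, key=lambda p: (int(p[0][0]), "spdf".index(p[0][1])))
            target[1] -= 1
        return [(s, c) for s, c in config if c > 0]

    cobalt = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6), ("4s", 2), ("3d", 7)]
    print(make_cation(cobalt, 2))
    # -> [('1s', 2), ('2s', 2), ('2p', 6), ('3s', 2), ('3p', 6), ('3d', 7)]
    #    the 4s empties first, so Co2+ keeps its 3d7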
So let's do some examples. We're going to look at these two columns on the periodic table, because this is where that promotion exception comes in — the fact that you can take electrons from your S block and move them up into your D orbitals, just to make this half-filled and this fully-filled. So these are going to be where our main exceptions take place. Okay. So for our starting point, I figure we'll talk about the easier of the rules first. We have cobalt to cobalt 2 plus. We've already drawn out the one for cobalt; we did this in an earlier slide. So we have this drawn out, we have this drawn out, and we need to take two electrons away. So where are we going to take them from? We take them from our S block first. Okay. And so we get this. So this is sort of our rule here: you take from the S block first. And again, rather than being an exception, it's really more of a rule. So just make sure you always do that, and we'll sort of leave that one where it is. If you're in the D block and you have to remove electrons to make an ion, take away from the S block first. Now let's go on to the harder exception: this half-filled, fully-filled sub-shell issue. So let's take neutral chromium. Neutral chromium on its own looks a little weird, a little different from normal. And then when we go to take our one electron away, we'll do the same thing that we just did. So first of all, we need to figure out what neutral chromium looks like. If I were to just ask you to draw it based on the periodic table, this is probably what you would draw. But this isn't right. The reason is because this, we can make half-filled. We can make it half-filled by moving this up, and then it'll be more stable. So let's do that. We move that up, and so this is what neutral chromium actually looks like. If you were to draw this, you'd be wrong. This would be wrong on an exam. This is what you would have to draw. So neutral chromium looks like this. And if you look at your periodic table, it follows down the column too. Now let's say I also want to know what chromium 1 plus looks like. So this is where we have to take from here. And so we take that one away, and that gives us chromium 1 plus. And this is why chromium likes to form a 1 plus ion. Because its neutral electron configuration looks like this, and so this is kind of at a relatively stable point — and yet you have this one electron here that's off on its own, and that's not overly stable. And so by taking it away, you've left the atom at that stable, half-filled configuration. Okay. So now let's do copper. If we look at copper, and I tell you, draw the electron configuration using the periodic table like we normally do, this is probably what you would come up with. Now, in the same way that half-filled sub-shells are relatively stable, so are fully-filled sub-shells. And getting this D to be fully filled adds a lot of stability. So if we take one of those electrons and move it up, this is what neutral copper will look like. So if I ask you for the electron configuration of neutral copper, this is what I would expect you to draw. This is what gives you your points. This is not correct; you have to promote the one electron up. So now, given that, why would copper want to form a plus one ion?
Well, we have one electron here off on its own. That's not very stable. And so it can just remove that, leaving it with that stable, fully-filled D sub-shell — a lot like a noble gas configuration. Okay. So now, using the logic that we've just done — we've already worked out copper and chromium — let's go through and think about why these are always going to form plus one ions. Well, this whole column and this whole column follow the same sort of electron configuration pattern, where you have either an almost half-filled sub-shell that's just missing one electron, or an almost fully-filled sub-shell that's just missing one. So you promote, in all cases, one electron up. And in all cases, you're left with this system where you only have one electron in the outside shell. So just take it away; that gives you a plus one ion. So a good exercise to go home and work out would be to go through and write the electron configurations of all of these, in both the neutral form and the ionic form, and prove to yourself, in each case, that you're going to have a system very similar to this, where you need to remove this one electron. It would be good practice to go write them all. Okay. So now that we know how to write electron configurations, we can start moving on to more bulk properties. What do molecules look like — that's our eventual goal — and how do they act? The stepping stone that we have to get through here is: what sort of properties do atoms in general have? And the easiest way to look at these is to look at the periodic table as a whole. Because it's hard to go through and say, okay, what properties does each atom in particular have? That's a lot of numbers and a lot of memorization, and it's not useful. But if we can look at the periodic table as a whole, we can go through and see, well, what sort of trends can we see? Can we see what happens to the radius, the size? Can we see what happens if we have an atom as opposed to an ion, and how that affects the size? Can we see what happens to ionization energy and electron affinity and electronegativity? Can we see all of that, just by looking at the periodic table? And what we're going to see is, in a lot of cases, we can. There are some exceptions, and sometimes it doesn't really work, but on the whole, we can. And that works out really nicely for us. So in this section of the chapter — because this is a very, very long chapter — we're going to do a little bit of history on the development of the periodic table, and then we're going to talk about all the trends. And you're going to be expected to be able to compare and predict the ordering. If I give you a bunch of atoms, you'll need to be able to tell me the ordering for all of these by the time we're done. You'll notice I have electronegativity in here. That's not covered in your book until chapter two, and that's because you need to know a little bit about bonding to be able to do it. But you also needed to know a little bit about bonding to get into this class. So we're going to talk about electronegativity here, because I think it fits in a little bit better. Okay. So first, some history. How has the periodic table actually developed? Now, if we look at the periodic table, we know that it's in order of atomic number. Well, that wasn't something that they knew right away.
What happened right away is, when Mendeleev looked at it, he said, well, as these masses increase, there are definitely some repeating characteristics. And in general, if you put them in order of their masses, you get these trends, where all of these elements have similar properties, and all of these elements have similar properties. So if you put them in order of atomic mass, it works pretty well. The problem is that they knew there were some exceptions to this. They knew that there were a few places where it didn't really work. So they just kind of manually switched it around and said, okay, well, we know that this really should be here and this really should be here. And then later on, what happened is they figured out that, oh, it's not actually mass that's deciding their properties. It's the atomic number that's deciding the properties. And that's what led to the more modern periodic table that we have. So Mendeleev's was actually a bit different. They were also able to build up and sort of make a periodic table before they even knew all of the elements. I mean, some of the elements weren't discovered until pretty recently — you look at sort of this box, you know, 1923 to 1961, that's pretty recent. So they were actually able to predict that some of these elements should be there, even though they hadn't found them yet, based just on the fact that they were missing some. So with that, this is sort of an outline of what we're going to do for the chapter. We're going to talk about each one of these individually. This is sort of me adding on to a table that you can find in Wikipedia; I added these in because I want to talk about them as well. So these are what we're going to go through in detail, talking about each one individually and discussing how where they are in the periodic table relates to them and how we can rank them. Now, it's not going to be based solely on where they are in the periodic table. We're going to have to remember our electron configurations too. So it's going to be based on where they are in the periodic table and, in many cases, their electron configurations. So before we talk about any of the other ones, we need to talk about effective nuclear charge. And the reason for that is that this is the reason behind almost all the other trends, at least side to side. So effective nuclear charge increases from left to right. So again, pulling out your periodic table and keeping it handy: if you go from left to right on the periodic table, it gets bigger. We're not really going to discuss the vertical trend. So what this actually is, is the amount of charge that an electron feels. You can think of that as: you have this magnet in the center and you have the electrons going around, and it's the amount of charge that the electron going around feels from the nucleus. And this changes based on where you are, left to right, on the periodic table. Now, the reason why this increases left to right — and why we're not going to talk about it going down the periodic table — is the way that shielding works. So remember we had that slide on what shielding was, and I talked about it as: you have this magnet and you put a bunch of paperclips around it, you put a bunch more paperclips around it, and eventually the paperclips won't stick anymore. And that's because all of the paperclips that are already on it are shielding the outside ones from feeling the pull of the magnet.
This is similar, although now the paperclips are repelling each other too, so it's even a little bit more pronounced. As you go down the periodic table, what happens is that you have layers and layers of electrons, and those outside electrons are farther away, and the inside electrons are blocking them from feeling the nucleus. So each layer has its own effective nuclear charge. So that's going down the periodic table. Now, as we go across the periodic table, something different happens. You're adding a proton to the nucleus each time you step across the periodic table. And you're adding an electron too, but you're not adding another energy shell; you're not adding any shielding. So all you're really doing here is making the magnet stronger. You're adding more and more protons; you're adding more and more charge. And so as you add each electron, as long as you don't add another energy shell, it's not further away, it's not more shielded, and so you just have a stronger magnet. And so the amount of pull that the electrons feel is stronger. And that's what effective nuclear charge is, and that's what all the other trends that we're going to talk about are based on. So our last couple of things for the day: we're going to talk about sizes — atomic radius and ionic radius. Atomic radius is going to be based on the effective nuclear charge as you go across the periodic table, and then on how many energy shells you have as you go down the periodic table. So let's talk about going down the periodic table first, because I think that's the easier one to understand. You're adding more electrons and you're adding more energy shells. Each time you add an energy shell, you're adding a whole other layer of electrons, and so it's going to get bigger. Each time you add another layer, you add a bunch of size. Now, on top of it, these ones in the 3s or 3p or 4s or 5s are actually being shielded from the nucleus by all those other electrons that are in the inside shells. So one, it's just that you have way more electrons and way more energy shells. And two, there's shielding going on. They're further away from the nucleus because their orbitals are further away, and there's shielding happening. Now comes the one that's a little stranger to talk about: the left to right. As you go across the periodic table this way, it gets smaller, not bigger. You add another proton, you add another electron, and yet it gets smaller. So that one's a little strange. Why do we think that would be? Well, let's think about our effective nuclear charge. What happens to our effective nuclear charge as we go that way across the periodic table? It gets bigger, right? Because you're taking that magnet and you're making it stronger. And so it's holding onto those electrons tighter; it's pulling them in closer. And all of an atom's size is made up of its electrons. So if the electrons are being pulled toward the nucleus with more force and being held tighter to the nucleus, the atom is going to get smaller, because they're being held closer. So even though you're adding an electron, you're not adding another energy shell — it's not like you're adding one to the outside — and your effective nuclear charge is increasing, so it's pulling those electrons in. And if any of you have checked out my exams, for any of these I always ask why. So make sure you know the reasons, not just the trend.
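If you want to see that reasoning as numbers, here's a deliberately crude Python sketch: pretend each core (noble-gas) electron cancels one full unit of nuclear charge, and valence electrons don't shield each other at all. The real treatment — Slater's rules — weights the shells differently, so treat these numbers as a cartoon of the argument, not as data.

    CORE_SIZES = [2, 10, 18, 36, 54, 86]  # electron counts of the noble-gas cores

    def crude_z_eff(z):
        # effective nuclear charge ~ protons minus core electrons
        core = max((c for c in CORE_SIZES if c < z), default=0)
        return z - core

    # Across period 2, the core stays at 2 while Z climbs, so Z_eff climbs:
    for symbol, z in [("Li", 3), ("C", 6), ("F", 9)]:
        print(symbol, crude_z_eff(z))  # Li 1, C 4, F 7 -> atoms shrink left to right

    # Down group 1, everything comes out around 1, so size down a group is
    # set by the number of shells (and shielding), not by Z_eff:
    for symbol, z in [("Li", 3), ("K", 19), ("Rb", 37)]:
        print(symbol, crude_z_eff(z))  # all 1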
Don't go to that first slide and say, I'm just going to memorize this and be done with it. Know the reasons. Okay, let's do some examples. So I have a little mini periodic table here, although once again, I suggest having your full periodic table out while we do all of these lectures. So we're going to rank the following in order of increasing radius. I have lithium, carbon, and fluorine. So we have lithium, carbon, and fluorine — we're going across the periodic table. Now, the smallest one is going to be on this side of the periodic table, the biggest one on this side. So it's got to be fluorine, then carbon, and then lithium. Okay, now let's see if you can do this one. Let's see if you can beat me to it. We have lithium, potassium, and rubidium. So is our smallest going to be near the top of the periodic table or the bottom? Yeah, the top. So we have lithium, and then potassium, and then rubidium. Now a tricky one. We have barium, we have selenium here, and we have fluorine. You may say, well, is that a diagonal? How do I know what to do? Well, your smallest one is going to be near the top, so that says it's going to be fluorine. And your smallest one going this direction should be the one over here, and so that says it's going to be fluorine too. So according to all of the rules, it's going to be fluorine. And then our next one is going to be selenium, because that's the furthest up and the furthest to the right. And then barium. So that's something I could ask you. Sure, it's on a diagonal, but I can ask you that. It's fair game for an exam. Okay, now let's look at this one: beryllium, aluminum, and germanium. Now this comes up a lot in a couple of weeks, when everyone's studying for their exam: how do I know which way it goes on this diagonal? And honestly, you really don't. And the reason is that it's going to be really, really close. So what happens here is that as you go down the periodic table, it's getting bigger, right? So, according to the going-down-the-periodic-table rules, germanium should be your biggest and beryllium should be your smallest. According to going across the periodic table, germanium should be your smallest and beryllium should be your biggest. So which one wins — going across the periodic table or going down the periodic table? In reality, they sort of cancel each other out. So they end up being very similar, and they end up having very similar sizes. And we're going to talk about this near the end of this chapter. This actually gives rise to a whole new trend called the diagonal relationship, where things going along this diagonal — the one where the two trends fight each other — have very similar properties. So this isn't something that I'd ask you to do, because in reality they're close, and you'd have to look them up to see what the differences are going to be. However, the other diagonal — the barium, selenium, and fluorine one — that's fair game, because it follows both trends. Fluorine should be the smallest, both according to the vertical and the horizontal trend. Barium should be the biggest, both according to the vertical and the horizontal trend. Okay, now our last thing, size-wise: ionic radius. So cations are always going to be smaller than their neutral counterparts. The more positive, the smaller they are. So copper 2 plus is going to be smaller than copper 1 plus, and that's going to be smaller than neutral copper. Anions are always going to be bigger.
So S2- is going to be larger than Cl-. They both have the same number of electrons — if you look at them on the periodic table and count electrons, they're going to have the same number. Yet sulfur is going to be bigger. Higher effective nuclear charge: this is the reasoning. So this little blip is the reasoning behind the rest of it. Think about what happens when you add or remove an electron. So let's first talk about this one; let's take away an electron. What's going to happen? Well, the copper is going to be pulling the electrons closer, right? Because now, instead of having a 1-to-1 ratio, we have an extra proton relative to all the electrons. So instead of being 1-to-1, now we have more positive charge, which means that it's going to pull the electrons a little bit closer to itself. It's going to make it smaller. Now, if we take away another one — so we have copper 2 plus instead of copper 1 plus — now there's even more positive charge per negative charge, and it pulls them in just a little bit closer. If we look at the negative ions, the exact opposite happens. We have this extra amount of negative charge being added. So the nucleus now has extra electrons it has to hold on to, and it can't hold on to them as well. And so the more negative charge you have, the less effective nuclear charge there is per electron, because it's being split over more electrons. So let's say we take these five different atoms and ions, and I say, let's try to figure out what they are. So let's look and try to figure out which ones would be the copper ions and which ones would be the oxygen ions. So we have two oxygens. We look and we say, well, which of these would be O minus and O2 minus, and which of these would be copper, copper 1 plus, and copper 2 plus? So if we were to line all these up, we could look at these and see: the biggest one would be O2 minus, then O minus, then copper, then copper 1 plus, and then copper 2 plus. Okay, let's do an example now where we have to rank all of these. Draw an arrow from the smallest to the largest species in the following isoelectronic series. So now I've introduced a new word for us too: isoelectronic. So I have all of these listed now. Let's figure out what isoelectronic means. We have iso. What does iso mean? Think back to your biology classes — you've maybe heard isotonic; that comes up a lot. Iso means same. Or isosceles, if you think geometry — that's your isosceles triangles, right? So iso is same, and then electronic — well, now we're talking about electrons. So: same electrons. If you look at these and you count how many electrons each of these has, they all have the same number of electrons. So that's what isoelectronic means. And so we draw the arrow this way, because all of these have the exact same number of electrons, yet sulfur 2 minus is going to have a lot more electrons per proton, right? Because this has a 2 minus — there aren't as many protons in it to hold on to that number of electrons. Meanwhile, over here, you have a lot more protons to hang on to all of those electrons, which is why you have a positive charge here. So it's got to go down this way.
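As one last sketch, here's the isoelectronic argument in code, using the standard 18-electron series as the example (S2- and Cl- from above, plus K+ and Ca2+, which I've added here for illustration — they aren't necessarily the ones on the slide). Same electron count everywhere, so more protons means a tighter pull and a smaller ion.

    # (symbol, atomic number Z, charge); every entry has Z - charge = 18 electrons
    series = [("S", 16, -2), ("Cl", 17, -1), ("K", 19, +1), ("Ca", 20, +2)]

    for sym, z, q in series:
        assert z - q == 18  # isoelectronic: same number of electrons

    # Fewer protons holding the same 18 electrons -> a larger ion, so sorting
    # by Z from low to high lists them from largest radius to smallest:
    largest_to_smallest = [sym for sym, z, q in sorted(series, key=lambda t: t[1])]
    print(largest_to_smallest)  # ['S', 'Cl', 'K', 'Ca']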
So next class we'll go through, and we'll do a bunch of examples with this, and we'll rank some ions, and we'll figure out, you know, we'll do some more matching ions to their pictures and things of this sort, and we'll talk about pretty much the rest of the periodic trends, but we'll end here for today.
Chem 1A is the first quarter of General Chemistry and covers the following topics: atomic structure; general properties of the elements; covalent, ionic, and metallic bonding; intermolecular forces; mass relationships. Index of Topics: 0:00:16 Quantum Numbers - Introduction 0:03:56 Principal Quantum Number 0:04:30 Angular Momentum Quantum Number 0:07:01 Magnetic Quantum Number 0:09:09 Filling in Quantum Numbers 0:15:05 Shielding and Penetration 0:18:00 Energies of Orbitals 0:21:12 Electron Configuration 0:26:11 Energies in Relation to Periodic Table 0:28:07 Diamagnetism vs Paramagnetism 0:30:15 Electron Configurations of Ions 0:33:14 Electron Configuration Exception 0:36:05 Examples of Configuration 0:40:30 Discussion Question 0:42:25 Periodic Trends 0:43:37 Development of the Periodic Table 0:45:24 Periodic Table Outline 0:46:17 Effective Nuclear Charge 0:48:47 Atomic Radius 0:54:39 Ionic Radius
10.5446/18966 (DOI)
Okay. We're going to give enzymes another try today. So there was a mistake in the way I calculated the how am I doing scores. And the mistake is that if you missed a quiz, not if you got a zero, but if you missed a quiz, then it was calculating your how am I doing score incorrectly. So a few of you pointed this out to me and I fixed it. And so I posted a new how am I doing score on Friday; it's separate from the earlier one. And so if you missed a quiz, your how am I doing score should be higher than it was on Thursday. All right. Just check that if you would. Thank goodness a few of you went to the trouble to check your how am I doing score and found this error. Okay. Quiz seven scores I think are already posted. All right. So that should be all of the quizzes for the quarter. You should see all the scores. The key is posted as well on the results page. Quiz six scores have been updated. We made an error in the way we graded that. Excuse me. Completely my fault. There was an error in the notation that we used on the quiz. And so G and Mark went through and regraded all of these and reposted them. And so the quiz scores that are up there for quiz six are updated. If you want to double check your quiz score, that would be a good thing to do. Now, the electronic evaluations got turned on last week. And a few of you have filled them out for me. But so far just 16% of you. There's no way I can force you to do this, but I want to ask you if you would please take the course evaluation. Let me explain the situation as follows. I teach general chemistry, analytical chemistry, physical chemistry. When I teach general chemistry and I get evaluations from the students, they're almost useless. And the reason is, it's not really their fault. I mean these students right here, they don't know if the drill instructor is a good guy, a bad guy, or an intermediate guy. They just know they can't hold their feet up for 30 seconds. All right. They have no experience with drill instructors. And so when I get feedback from them, you know, I can feel like I've done the best job in the world of teaching this class, and I will get all of this extraneous feedback, you know, there's something wrong with your pants. You guys are like these guys right here. All right. You're grizzled veterans. Most of you are graduating in a week. And you've seen everything. You've seen bad teaching. You've seen good teaching. You've seen intermediate teaching. You're calibrated. You know exactly what issues exist with the class. All right. I know there's issues with this class. And it would really help me out if you went through and did these evaluations. And not only just scoring them, because I know that's the easiest thing, because that's just radio buttons. But if you actually wrote some comments to help us make the course better. I read every one of these comments. I get a printout of a list of all the comments. And I read those all. All right. I don't get to see them for about two weeks, because they don't allow me to see what your comments are before I issue the final grades. But after I issue the final grades, a week after that, they let me see what your comments are. Okay. So please, if you would. I know it takes time. It probably takes about 10 minutes, 15 minutes to do this. I'd appreciate it. All right. So we're going to talk about enzyme kinetics. Turns out all of this stuff is in Chapter 21 of your book. All right. In a chapter that's called catalysis.
So what we want to do, and once again, all quarter we've been doing this: we've been cherry-picking certain topics from stat mech, thermo, and kinetics that are the most important topics, I think. And so this is one of those topics. The basic idea is we want to understand how enzymes catalyze reactions. Now we're not really learning how enzymes do this. We're studying the phenomenology of enzyme catalysis. In other words, we're looking at the rates of enzyme reactions. We're trying to understand how we can break the mechanism of enzyme substrate catalysis down, right, and turn it into a modular thing that we can assign constants to and make measurements on and compare enzymes against one another and so forth. All right. We're trying to really understand the phenomenology of enzyme substrate catalysis. So we've got an enzyme, we've got a substrate. When the substrate docks in the enzyme, what this schematic diagram here is trying to depict is that there is a recognition event that has to occur. In other words, the enzyme is not going to catalyze this reaction, whatever it is, for any substrate. There has to be a recognition of the substrate by the enzyme at the active site in order for the substrate to dock. And once this docking occurs, then the enzymatic reaction can proceed. And in this case, it looks like some sort of bond breaking reaction occurs. Okay. So this enzyme substrate complex is meant to represent this entity here, all right, and then products are produced and released from the enzyme at that point. Once this reaction occurs, the affinity of the products for the enzyme is lower than the affinity of the substrate for the enzyme. If that was not true, the products would just stay bound to the enzyme and it would be game over. Okay. So the enzyme has to release the products once they're formed in the active site. If that didn't happen, the enzyme would be pretty useless, wouldn't it? So that's what this event is depicting here. Okay. So what happens if we have a reaction and we want to work out some equations that are specific to enzyme catalysis, that help us to understand these reactions? The batteries in my laser pointer are dying, so I'm going to use it sparingly, but hopefully it'll get us through this lecture. Here's the mechanism I showed on the previous slide. Here's the rate of the reaction. All right. I'm just depicting here the rate at which P is formed. All right. I think you can see that it's a unimolecular reaction from the enzyme substrate complex with a rate constant K2. We're going to apply the steady state approximation again. All right. It's another example of that. And to do that, we set the time rate of change of the intermediate ES equal to zero. And so I think you can see there's a rate at which ES builds up and two ways in which ES is consumed. And so that equation we could have written down earlier because it's just the steady state approximation. And now we have to say some things that are specific to enzyme reactions. We're going to make some substitutions into this equation. And one of the substitutions we want to make is for the enzyme concentration, because we don't know a priori what it is as a function of time. All right. Presumably the enzyme is going to get bound, form the enzyme substrate complex, and the free enzyme concentration is going to go down and the enzyme substrate concentration is going to go up. Presumably that's what has to happen. And so we don't know what E is, but we do know how much total enzyme we've got. All right.
At least if we're studying this reaction in the laboratory, we added a certain amount of enzyme at the beginning of the experiment to study the enzyme kinetics that we're trying to study in the lab. Now in a natural system, you know, if there are cells around and we've got some extract from a liver and some enzymatic chemistry is going on, we don't know anything. All right. We can't study enzyme kinetics under those conditions in general. Right. To study enzyme kinetics, we've got to take the enzyme, control the pH, put some substrate in contact with it and measure the reaction rate as a function of time somehow. We've got to do that. Okay. So presumably we know this E0, at least if we're doing the experiment in the lab, we know it. So the total enzyme can only exist in two forms: free enzyme and enzyme substrate complex. And so now I can solve for the free enzyme in terms of the total concentration of enzyme. And of course, it's just equal to the total concentration minus the enzyme substrate complex. So now I can plug that into this expression. Boom. Here's my new steady-state approximation expression. All right. And when I distribute this K1 over these two terms, I'm now going to get four terms instead of three, one, two, three, four terms. And I can move the one generation term over to the left-hand side, put all the minus signs on the right side. All right. So all of that has got to be equal to that if this is equal to zero. And now I just solve this for ES. And that's easy to do because I've got ES, ES, ES. I just factor that out. And this is the expression that I get. Oh, and remember that this ES concentration that I'm using now refers to the steady-state enzyme substrate concentration. All right. We're assuming that the steady-state approximation is correct. So this is the equation that I get for that. And earlier we said that the rate at which P is formed is just equal to K2 times the enzyme substrate concentration. And so now if I just plug ES into here, I get this equation right here. Looks like a mess. Okay. But I'm going to divide the numerator and the denominator by K1. So K1 is going to go away here. And K1 is going to go away here. And K1 is going to end up in the denominator here. All right. So this is the new expression for the rate of the reaction that I get, still subject to the steady-state approximation. And now I'm going to roll all these constants up into one constant, KM, big K sub M. All right. All of those guys together are going to be the Michaelis constant. And this equation is the Michaelis-Menten equation. It is the most important equation in enzyme catalysis. Okay. But it's moderately useless. In other words, we've derived this equation. It's very important. It contains all of the kinetic rate information for enzymatic reactions. But we can't extract information from this equation very easily in the form that it's in. Ironically, it's the most important equation in enzyme kinetics, but we can't really use it. All right. We need to do some more work. What do we need to do? Well, first of all, let's think about this equation. See what it's telling us. Notice that in the denominator, there's an addition operation. What does that mean? There's going to be limiting cases. All right. If there's an addition operation in the expression for the rate, all right, there's going to be limiting cases. But why? Because one half of this addition operation could be large compared to the other half or vice versa. Okay. So we've got an addition operation in the denominator.
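Before we take those limits, here is the algebra just described, collected in one place (a sketch in the lecture's notation):

```latex
% Steady state: E + S <-> ES (k_1, k_{-1}), ES -> E + P (k_2), with [E]_0 = [E] + [ES]
\frac{d[\mathrm{ES}]}{dt} = k_1[\mathrm{E}][\mathrm{S}] - k_{-1}[\mathrm{ES}] - k_2[\mathrm{ES}] = 0
\;\;\Longrightarrow\;\;
[\mathrm{ES}] = \frac{[\mathrm{E}]_0[\mathrm{S}]}{K_M + [\mathrm{S}]},
\qquad K_M = \frac{k_{-1} + k_2}{k_1}

% and therefore the Michaelis-Menten equation:
\frac{d[\mathrm{P}]}{dt} = k_2[\mathrm{ES}] = \frac{k_2[\mathrm{E}]_0[\mathrm{S}]}{K_M + [\mathrm{S}]}
```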
What are the corresponding limiting cases? Well, if K2 is big, remember K2 is part of KM. If K2 is big, how big? Big compared to K minus 1. And K2 over K1, if that's big compared to S, then this thing simplifies quite a bit. Notice if K2 is large, then K minus 1 can be neglected and S can be neglected. So I've just got K2 over K1, and K2 is going to cancel, and so the rate of the reaction is going to simplify very substantially. Right, the rate of the reaction is just going to be K1 times E0, the initial concentration of enzyme, times the substrate concentration. Should that always be the case because we're assuming the steady state approximation holds? Yes. Did everyone hear that? Shouldn't it always be true that K2 is large if the steady state approximation is correct? The answer is yes. But we're going to abuse this equation on a routine basis. All right, so this limiting case is not always going to be observed. It turns out it is useful, and the reason is that we're going to apply it to the initial conditions. In other words, we're going to measure a rate at the beginning of the enzyme substrate reaction using an initial concentration of substrate and an initial concentration of enzyme, and under those conditions, this equation is going to work pretty well. Okay, so what does this mean? If K2 is big, all right, we've derived this simplified equation, but conceptually what does that mean? What does it mean if K2 is big? What it means is that the reaction doesn't even know ES exists, right? If K2 is big, as soon as ES is formed, boom, it reacts immediately, right? So its concentration is very low, which is good. That's in compliance with what the steady state approximation is assuming, right? The enzyme substrate concentration is going to be quasi-constant because it's very low. This also means that E is approximately equal to E0, because ES is approximately zero, right? And so essentially what this means is that the formation of the ES in this first reaction here is the rate limiting step in this sequence of reactions, all right? The rate at which the ES is formed, because as soon as the ES is formed, boom, it reacts like a shot. All right, notice also that the reaction is first order in substrate, right? In this limit of high K2, the reaction is first order in substrate. We'll come back to that. What if S is big, all right? If S is big, then I can neglect this whole guy in the parentheses here, all right? And I just end up with this expression right here, and S is going to cancel, and so the reaction rate in that case is just given by this expression here. It doesn't even depend on S. Not only that, it doesn't even depend on time, right? It only depends on the initial enzyme concentration, all right? In the limit of large S, large concentrations of substrate, the reaction rate is constant, right? You don't see any change in the rate. So conceptually, what do we expect to see at low substrate concentration? Well, we'll get to that, okay? What does this mean? What's happening? It means in this limit, when S is big, all of the enzyme is tied up as enzyme substrate complex, all right? We've got a very high concentration of S. We've driven this reaction forward because it's first order in S, and we've basically saturated all the enzymes. Every enzyme's got a substrate on it, right? There's so much substrate around that this reaction has been driven so far to the right, based on Le Chatelier's principle, that we've tied up all of the enzyme as enzyme substrate complex.
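In symbols, the two limits just described:

```latex
% K_M >> [S] (large k_2, so K_M ~ k_2/k_1): first order in substrate
v \approx \frac{k_2[\mathrm{E}]_0[\mathrm{S}]}{K_M} = k_1[\mathrm{E}]_0[\mathrm{S}]

% [S] >> K_M: zero order in substrate, every enzyme saturated
v \approx k_2[\mathrm{E}]_0 = v_{\max}
```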
The enzyme substrate complex concentration is approximately equal to the total enzyme concentration, okay? This means that the reaction of ES, this second step, is now rate limiting, okay? And that's what it looks like, all right? I'm using that rate constant for the total reaction rate. ES is just equal to E0, all right? So in this limit, the reaction doesn't care about the substrate concentration. What if K minus 1 is big? Not very interesting. It happens. Reaction will be slow. Not an interesting limiting case, but one that could happen, right? K minus 1 could be large compared to K2, and then K minus 1 over K1 could be large compared to S. But if that's true, the equilibrium lies far towards the substrate. Equilibrium lies way over here, all right? And it's a bad enzyme. It's not a very good enzyme. The enzyme's not recognizing the substrate. Maybe the substrate is not the normal substrate for that enzyme. The reaction rate's just slow, okay? The substrate doesn't want to dock with the enzyme to form the enzyme substrate complex. Okay, what if S is small? What happens at low concentrations of substrate? Check it out, all right? Here's the Michaelis-Menten equation, all right? If I make S small, it disappears from the denominator. I end up with this expression right here. The reaction is first order in S. Okay? So we said when S is big, the reaction rate becomes constant. When S is small, the reaction rate is first order in S. Okay? So we know what this reaction's going to do as a function of S. Right? It's going to be first order at low S. So this is a plot of the reaction rate versus the concentration of S. At small S, we see first order kinetics for S. In other words, the reaction rate goes up linearly as a function of the concentration of S. I think you can see this looks like a straight line down here, right? But as S gets larger, it starts to curve, and at high concentrations of S, it becomes concentration independent. All right? And in that limit, there's a maximum reaction rate, V max, that's equal to K2 times the total concentration of enzyme that I started out with in this reaction. All right? So this is what any enzyme will do as a function of the substrate concentration. All right? That's what this equation predicts. All right? And it's also what's experimentally observed. Now, there's some other things that are indicated here that I'll explain to you. In the limit of large S, we obtain the maximum rate, V max. I already showed that to you. Right? Here's the V max. And we're not quite there yet. We would have to go to higher and higher and higher concentrations of substrate. But eventually, this reaction rate will asymptotically approach this dashed line, which represents V max. That's the maximum reaction rate for the enzyme. That's a reaction rate that is characterized by the fact that every enzyme's got a substrate stuck on it. So at that point, the reaction can't go any faster. Okay. So K2 times E0 is V max, yes. Okay. Now, there's lots of things in enzyme kinetics that are confusing, but here's one of them. All right? So let's get rid of one thing that's confusing here. V max over E0 is given by this expression right here. But we also call V max over E0 the turnover number. I think you can see V max over E0 is obviously equal to K2. All right? What are the units of K2 going to be? Seconds to the minus 1. Why? Because ES reacts to give products, right? So it's a unimolecular reaction.
And so the rate constant is going to have units of 1 over seconds. Okay? And so the turnover number is going to have units of per second. Reactions per second is what it represents. Okay? So V max over E0, if we work out the units, this just shows that. V max is molar per second, if you will. That's the reaction rate. The concentration of enzyme is molar. So the units of this quotient here are 1 over seconds. We're going to call that K catalysis, or K2, or the turnover number. All three of those things are the same. Right? If someone says the turnover number, they're just talking about K2. If someone says Kcat, they're just talking about K2. If someone says K2, they're just talking about the turnover number. Because they're all the same thing. They all have units of 1 over seconds. Isn't that confusing? Why have three names for the same thing? I don't know. Okay. Now, let's take the ratio between the reaction rate and V max. Now I'm just going to call the reaction rate V. Remember, I was calling it DPDT? No difference. I'm going to just call it V, the velocity. Divide that by V max. Remember, V max is just K2 times E0. If I divide these two things, obviously that's going to cancel. And so I'm just left with S over S plus KM. And if I take the reciprocal of that, I get this. And if I divide by V max and cancel these S's over here, I get this, which is the same as this. So all we've done is to take the Michaelis-Menten equation and do some more algebra on it to get an equation that is not called the Lineweaver-Burk equation. It should be called the Lineweaver-Burk equation. But it's not. But if you make a plot using this equation, it is called the Lineweaver-Burk plot. How are you going to make a plot? Take one over S, that's going to be your horizontal axis. Sorry, there's a typo on the slide; that axis label should be X, and this one should be Y. One over the velocity is the vertical axis. The slope is this, the intercept is that. I think I've used this slide for three years, and I've never noticed that. All right. To be clear, we're talking about the initial substrate concentration and the initial velocity. All right. So you take your enzyme solution in your pH-controlled buffer. All right. You add some substrate and you measure, usually spectroscopically, how fast the products are formed as a function of time. And you extrapolate to zero time. All right. And you measure the reaction rate at time zero. Of course, you're measuring initially the reaction rate over a range of times, but you extrapolate to zero to get the initial rate. We call that V zero. All right. And it applies, of course, to the initial substrate concentration. That's usually what we're plotting in a Lineweaver-Burk plot. Here's what it looks like. All right. One over S, horizontal axis. One over V, vertical axis. Notice it says V zero. That means the initial velocity. All right. This should really say S zero. That's the initial substrate concentration. All right. We should get a straight line from that. The slope of the line is going to be Km over V max. The intercept on the y-axis is going to be one over V max, and the intercept on the x-axis is going to be minus one over Km. Really? Yes? Check it out. If I set one over V equal to zero and I solve for what one over S is going to be when one over V equals zero, I get minus one over Km. All right. From the not-Lineweaver-Burk equation. Okay?
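For reference, the rearrangement just described, with the slope and intercepts labeled:

```latex
\frac{1}{v_0} \;=\; \frac{K_M}{v_{\max}}\cdot\frac{1}{[\mathrm{S}]_0} \;+\; \frac{1}{v_{\max}}
% slope = K_M / v_max
% y-intercept (at 1/[S]_0 = 0):  1/v_max
% x-intercept (at 1/v_0  = 0):  -1/K_M
```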
So this is an extremely useful equation. The Michaelis-Menten equation, no, not so much. Not useful except that I can derive the not-Lineweaver-Burk equation from it, and that's really useful. Here's a problem from last year's final exam. All right. The following results are obtained for an enzyme: substrate concentration, reaction rate. This should say S zero. This should say V zero. Is the substrate concentration constant during this whole reaction? Not necessarily. It's going to be consumed as a function of time. Is the reaction rate going to be constant as a function of time during this reaction? Not necessarily. All right. We're really talking about the initial substrate concentration and the initial velocity. Here's some data. Tell me about this enzyme. I want to know everything about its kinetic behavior. All right. Well, we can get everything. In other words, we can get KM, and we know E zero. All right. And we can get V max by making a Lineweaver-Burk plot. So to make a Lineweaver-Burk plot, we don't want S. We want one over S. We don't want V. We want one over V. I want a plot of one over V versus one over S. Here's a piece of graph paper that you're going to have on your answer sheet. All right. Here are the data points you're going to plot. You're going to draw this axis, because you know you don't want it here. All right. You're making a Lineweaver-Burk plot. You want it in the middle somewhere, because you want to extrapolate this thing to where one over V equals zero. This dashed line here is meant to represent the maximum possible errors that any student could possibly make in plotting these data points. So we can grade it. Okay. Now what's helpful to you is to have a straight edge, so that you can plot these data points here and then draw a straight line through them. All right. So maybe you can use the edge of your notebook. But if I was you, I would buy one of those cheap plastic rulers that are about that long. That's very handy for this purpose. A straight edge. Okay. So now that I've got this, I can determine V max, Km, Kcat, all of these things. Everything that you can possibly determine in terms of the observational kinetics of this enzyme, we can extract from this Lineweaver-Burk plot. Okay. Getting the right answer starts with getting the right Lineweaver-Burk plot and the right intercepts and the right slope. Okay. You with me so far? This is straight out of Chapter 21. Now, how many people have heard everything I've said so far in another class? All right. Now, what could possibly mess this up? Well, an inhibitor. All right. A molecule that inhibits this reaction could mess it up. All right. And inhibitors are extremely important in enzyme kinetics. And this is the beauty of the Lineweaver-Burk plot. All right. How the Lineweaver-Burk plot changes with and without inhibitor is diagnostic of how the inhibitor is acting on the enzyme. It's a thing of beauty. All right. You can measure the Lineweaver-Burk plot without the inhibitor. Add the inhibitor and look what happens to the Lineweaver-Burk plot. Add a little more inhibitor. Add a little more inhibitor. All right. Of course, you have to measure the initial velocity of the reaction at a series of substrate concentrations, all right, for each amount of added inhibitor. All right. But once you've done that, you can determine exactly what type of enzyme inhibition is occurring. There's three possibilities. The inhibition could be competitive, non-competitive or uncompetitive.
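As a sketch of that workflow in code (the data points below are invented for illustration; they are not the exam data):

```python
# Lineweaver-Burk analysis: plot 1/v0 against 1/S0, fit a straight line,
# then read Vmax off the y-intercept and Km off the slope.
import numpy as np

S0 = np.array([0.5, 1.0, 2.0, 5.0, 10.0])  # initial [S], mM (hypothetical)
v0 = np.array([2.0, 3.3, 5.0, 7.1, 8.3])   # initial rates, mM/s (hypothetical)

slope, intercept = np.polyfit(1.0 / S0, 1.0 / v0, 1)  # least-squares line

Vmax = 1.0 / intercept  # y-intercept of the plot is 1/Vmax
Km = slope * Vmax       # slope of the plot is Km/Vmax
print(f"Vmax ~ {Vmax:.1f} mM/s, Km ~ {Km:.1f} mM")  # ~10 mM/s, ~2 mM
```

With the total enzyme concentration E0 known, the turnover number follows as kcat = Vmax / E0.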
Is this another example of confusing enzyme terminology? Yes. Non and un are not the same. All right. I don't know who came up with this stuff. Okay. So what's the difference? It's very simple. In competitive enzyme inhibition, the inhibitor is binding at the same active site that the substrate wants to bind to. All right. If the inhibitor is in there, the substrate can't bind. It's blocking it. All right. That's competitive inhibition. The word perfectly matches what's happening. In non-competitive inhibition, the inhibitor is binding away from the active site, and it can bind either to the free enzyme or to the enzyme substrate complex. But it's binding away from the active site. It's not binding at the active site. So the inhibition is not competitive. And finally, in uncompetitive inhibition, the inhibitor is binding away from the active site but only to the enzyme substrate complex. All right. So there's a subtle difference between non-competitive and uncompetitive. In non-competitive, the inhibitor can bind either to the free enzyme or to the enzyme substrate complex. In uncompetitive, it can only bind to the enzyme substrate complex. You see the difference? Free enzyme or enzyme substrate complex, that's non. Enzyme substrate complex only, that's un. Good luck. It's hard to remember. All right. Here's a cartoon. Here's the inhibitor. Here's the substrate. The inhibitor is blocking the active site. That's this piece of the pie here. All right. Once the inhibitor is in there, the substrate can't bind. Obviously, the reaction slows down. But in a very characteristic way. All right. It slows down. All right. And so here's what the Michaelis-Menten data looks like. All right. Initial velocity, initial substrate concentration. All right. Here's what happens in the absence of the inhibitor. Now I add the inhibitor, the whole curve shifts to the right. But if I make the substrate concentration high enough, I get the same V max. What's happening? I'm out-competing the inhibitor. I've got a certain inhibitor concentration. And if I add enough substrate, eventually, this guy wins over this guy. All of the enzyme gets tied up by substrate. If there's 1,000 substrates for every inhibitor, all right, I think you'll agree the substrate is going to start to win. And in that limit, I get the same V max. All right. The enzyme is maxing out during the reaction that I care about. The inhibitor is having a relatively small effect on that. Okay. So if it's a competitive inhibition situation, you can always out-compete the inhibitor by making the substrate concentration enormous. In that limit, you get the same V max. What does that mean? Oh. So let me point out that in competitive inhibition, you see how this is half V max here, this dashed line right here. What happens at half V max? Why is that interesting? Well, it turns out at half V max, if you do the math, Km is equal to the substrate concentration. I put it on this slide so you can check it out later on. All right. Here's the substrate concentration. That's equal to Km at V max over 2. All right. So if you're just looking at your raw data and you haven't made a Lineweaver-Burk plot yet, you can look at this plot right here. You can go to half V max. That's equal to Km right there. And so you can see the effect of the inhibitor is to increase Km. All right. But it doesn't affect V max. What influence does that have on our Lineweaver-Burk plot? This is 1 over V max. It does not change for competitive inhibition.
So here's no inhibition. Now I add inhibitor. Now I add some more. Now I add some more. All right. I get a series of straight lines that have the same intercept but different Km's. There's Km. Minus 1 over Km is down here, and 1 over V max is up here. All right. And I can look at this Lineweaver-Burk plot and say, boom, it's competitive inhibition, obviously. All right. Same intercept, different slope. And the slope is getting larger as I increase the inhibitor concentration. What about non-competitive? Well, in this case, here's the inhibitor. It's binding away from the active site now. You see that? What I'm showing here is the inhibitor binding to free enzyme, but it can also bind to the enzyme substrate complex, all right, if it's non-competitive inhibition. And in that limit what happens is Km is unaffected but V max gets reduced. All right. So this is 1 over V max. Remember? This intercept is 1 over V max. And so as I increase the inhibitor concentration, Km doesn't change, but the intercept changes and the slope gets bigger. I can look at this and I can easily tell the difference between these two cases right here. All right. This guy is preserving this intercept. This guy is preserving this intercept. All right. So the Lineweaver-Burk plot allows us to diagnose immediately which of these two cases is operating. I mean, I know when I add inhibitor the reaction is slowing down. I can see that. All right. But I want to understand how the inhibitor is acting on the enzyme. Is it plugging up the active site? Is it binding away from the active site? Is it doing that for the enzyme substrate complex only? Or for the free enzyme as well? All right. I can tell all that just by doing a few experiments. Finally, uncompetitive amazingly gives us a third case that we can easily tell apart. In uncompetitive inhibition, this slope, which is Km over Vmax, turns out to be preserved. And I get a different intercept for 1 over Vmax, a different intercept for 1 over Km. But the ratio stays the same. Isn't that amazing? So I can classify, if I am willing to do the work, because there's a non-trivial amount of work here. I mean, you've got to do four or five experiments to map out every one of these lines, with different initial substrate concentrations and different inhibitor concentrations, to get different initial velocities. Now if you buy this book, which is the Bible of Biochemistry, even it does not contain what I'm about to show you, which is the mathematical derivation of where these straight lines come from. It's not in your book. It's not even in Lehninger. It's value added. But you can't get it very easily anywhere else. All right, are you ready for this? Because I'm not going to laboriously do all this math. I'm just going to click through these slides and show you. Because it's a thing of beauty. Here's the mechanism that we've been talking about so far. All right, reversible formation of the enzyme substrate complex, reaction of the enzyme substrate complex to give products. All right, now we're going to add two other possibilities. The enzyme can reversibly form a complex with the inhibitor. That's I. All right, and the enzyme substrate complex can form a complex with the inhibitor as well. All right, so there's two new possibilities that we have to consider.
These two new possibilities encompass all of the things that can happen with all three types of enzyme inhibition: competitive, non-competitive, and uncompetitive. Okay, and all we do is we write a conservation of mass equation. The total enzyme that I'm putting in my beaker has to exist in one of four states: free enzyme, enzyme substrate complex, enzyme inhibitor complex, enzyme substrate inhibitor complex. There's only four possibilities, right? Now I do the math on this, and this is what that looks like. I have to create some definitions. I'm going to define something called alpha, which is one plus the inhibitor concentration over big K I. What's big K I? That guy, right, the equilibrium constant for the inhibitor interacting with free enzyme. That equilibrium constant. What's K I prime? That's the equilibrium constant for the inhibitor binding to the enzyme substrate complex. And alpha prime is one plus the inhibitor concentration over that, okay. And so I do a little algebra, and we can write equations. And it doesn't take us that long before we get to your equation 21.8B. Here's the not-Lineweaver-Burk equation for enzyme inhibitors. Check this out. I've got an alpha prime instead of a 1. I've got alpha times the Michaelis constant instead of KM. So I've just inserted alpha prime here and alpha here, and now I've got an equation that is completely general for all types of enzyme inhibition. All right, this equation explains those three behaviors that we just talked about. How does it do that? First of all, notice that if alpha prime is 1 and alpha is 1, then the inhibitor is exerting no effect. I just get the not-Lineweaver-Burk equation back. All right, that's this case. Alpha equals 1, alpha prime equals 1, zilch, nothing happens. All right, for competitive inhibition, alpha is greater than 1, alpha prime equals 1. Vmax does not change, but KM gets bigger. This table is a thing of beauty. It's also not found in Lehninger. Non-competitive inhibition: the inhibitor binds to both ES and E away from the active site. Yes, both alpha and alpha prime are greater than 1. Vmax gets smaller, KM doesn't change. That's what these two alphas actually mean. And finally, uncompetitive inhibition: alpha doesn't change, but alpha prime is greater than 1. Both of these two things are smaller, but the ratio is preserved. So the slope stays the same for uncompetitive inhibition. These are those three cases I just showed you. I just put the slide in here so you can look at it next to this. Yes. So one thing that should be obvious is that for competitive inhibition, right, the inhibitor looks like the substrate. If it doesn't, it's not going to inhibit the reaction. This is the classic competitive inhibition reaction: succinate dehydrogenase, dehydrogenating succinate. All right, this is the mitochondrial membrane. Here's what that enzyme looks like. It's got a transmembrane component that plugs into the mitochondrial membrane. This is the lipid bilayer. All right, we've got these alpha helical peptide regions that are very hydrophobic that like to insert into the lipid bilayer. All right, and then we've got this more globular part outside the membrane. Here's the reaction. Here's the succinate. That's this guy. All right, here's the fumarate. That's the product of the reaction. All right, that's the succinate after it gets dehydrogenated. Succinate dehydrogenase will form fumarate, form this double bond.
All right, the classic competitive inhibitor is malonic acid. Look at this, and look at this. All right, if I rotate about that single bond, I think you're going to agree that this looks a lot like that. There's an extra carbon here. All right, but malonic acid inhibits succinate dehydrogenase. Inhibits the heck out of it. All right, why? Because the active site gobbles this stuff up. It looks just like this. It's very similar in size. Okay, did I finish early? Are there any questions about this? Because on Wednesday, we're going to talk about something else. Okay, look at Chapter 21, and then see if you have questions about this stuff.
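For reference, here is the general inhibited form from this lecture collected in one place, with the three diagnostic cases as comments (notation as defined above):

```latex
\frac{1}{v_0} = \frac{\alpha K_M}{v_{\max}}\cdot\frac{1}{[\mathrm{S}]_0} + \frac{\alpha'}{v_{\max}},
\qquad
\alpha = 1 + \frac{[\mathrm{I}]}{K_I},
\quad
\alpha' = 1 + \frac{[\mathrm{I}]}{K_I'}

% competitive:     alpha > 1, alpha' = 1  -> same 1/v_max intercept, steeper slope
% non-competitive: alpha, alpha' > 1      -> K_M unchanged, v_max reduced
% uncompetitive:   alpha = 1, alpha' > 1  -> parallel lines (slope K_M/v_max preserved)
```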
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 25. Thermodynamics and Chemical Dynamics -- Enzymes Pt. II -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry.) We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:00:06 Enzymes 0:12:36 The Michaelis-Menten Equation 0:20:27 Michaelis-Menten Kinetics 0:24:30 Ratio Between V and Vmax 0:25:34 Lineweaver-Burk Plot 0:32:56 Classifying Inhibitors
10.5446/18962 (DOI)
Okay, are you guys doing? Two more lectures. We're going to review the final for the final on Friday. So that's all we'll really be doing on Friday. I'll tell you in some detail what's going to be on it. Did I mention that the electronic course evaluations are available? I didn't look to see if any of you guys did these in the last two days. Monday we talked about enzyme kinetics, but we're not going to say anything more about that today, but as you'll see on Friday, that's definitely going to be on the final. So I jammed a lot of information into that lecture on Monday. All of enzyme kinetics, all of enzyme inhibition, it's all in there. All right, some of that stuff is not in your book, like the enzyme inhibition stuff. Okay, so you can either find another source to study that from or you can study it directly from the lecture. I think everything that I put in there is approximately correct. Now, today we're going to talk about transition state theory. At the beginning of the quarter, this is what we hope the course would look like. We had this beautiful blue rectangle of reaction dynamics here at the end of the course, and we were optimistic that we'd be able to talk a lot about this, maybe for more than one lecture. But what actually happened was it fell off the end of the course, and all we've got left is a single lecture that you're going to hear about today. That's all that's left of reaction dynamics. Could we have just left it out? We could have, but enough damage has been done to you guys. In 131A, all you heard about was quantum mechanics from Professor Renzepis. In 131B, all you heard about was spectroscopy from Professor Martin. And I know you know a lot about quantum mechanics and spectroscopy, and that's good. But everything else is important too, and so this subject is actually very, very important to us. And so I've tried to pick out the one thing that I cannot allow you to leave this class without knowing. I've tried to pull the one thing out of reaction dynamics that you've got to know about. This is the one thing, this lecture. I'm going to tell it to you. Basically what we want to understand is where does the Arrhenius equation come from when we measure the temperature dependence of reactions? This is the reaction rate. Most of the time we find out that it conforms to this equation. We make a plot of log of the reaction rate versus 1 over T. We get a straight line. We get the activation energy of the reaction from that straight line, and we report that, and we think about that. That guides a lot of what we do as physical chemists, but we have never explained where this equation comes from. In some ways this is the most important equation in chemical kinetics, but we haven't talked about what its origin is. Fundamentally, where does this equation come from? So we're going to do that today. I can do that in one lecture. It reminds me, of course this statement is grammatically incorrect. It's a dangling preposition. It reminds me of one of my favorite jokes. The Texas student goes to Harvard, and he asks a Harvard student, where are you from? The Harvard graduate says, well, I come from a place where we do not end our sentences of propositions. The Texas says, okay, where are you from? Other words are sometimes substitute. I would love to spend 10 minutes talking to you about these two guys. Henry Ahring worked at the University of Utah for most of his career, one of the great American physical chemists of the 20th century. 
Michael Polanyi, Nobel Prize winning physical chemist. Both of these guys are responsible for what we're going to be talking about today, transition state theory. They worked it out in the 1930s. So here's the short version of the history that matters here. Way back in 1916, G.N. Lewis, the first great American physical chemist, worked out Lewis dot structures. There was no quantum mechanics in 1916. It wasn't invented until 1924. He came up with a very compelling model for chemical bonding involving pairs of electrons that turned out to be correct, and much of what it predicted held up, way before the fundamental underpinnings of quantum mechanics were described. He figured all that out. He was a genius. Heisenberg, Dirac, and Schrodinger discovered quantum mechanics in 1924. Two Germans, Heitler and London, took quantum mechanics and applied it to bonding for the first time. They did the first quantitative description of chemical bonding. Their names are usually lost. We hardly even mention them. Sometimes we talk about London dispersion forces. You guys remember that? Van der Waals forces? This is the guy, Fritz London. He's the guy who figured that out. 1929, five years after quantum mechanics, these guys developed a quantum mechanical way to think about reaction rates. Heitler and London figured out a quantum mechanical way to think about bonding. That's pretty important. But Polanyi and Eyring figured out how to think about reaction rates from a quantum mechanical perspective. We're going to zoom through what they taught us today. Here's the basic idea. A plus B goes to products. We're just going to talk about this generic reaction today. We've stripped down this lecture so that we've taken everything else out. It should be three weeks' worth of lectures. We know the rate law is this: the rate constant times the concentration of A times the concentration of B. These can be pressures or concentrations. Transition state theory says that this reaction actually occurs through this mechanism. A plus B are in equilibrium with something called AB double dagger. That reacts in a unimolecular fashion to give us products. Transition state theory basically takes all of the reactants, puts them on one side, all of the products, puts them on the other side, and in between it constructs something called an activated complex or a transition state. That's why it's called transition state theory. This thing here is the transition state. It's an entity that is intermediate between the reactants and the products. If bonds exist in the products that don't exist in the reactants, they exist partially, in a weak form, in the activated complex or in the transition state. If bonds are broken here, they're partially broken in the transition state. The transition state has structural attributes, bond lengths, and so on that are intermediate between the reactants and the products. One of the key points about transition state theory is it postulates that an equilibrium exists between this transition state and these reactants. Here's the picture in terms of energy that you've seen a million times, but what we're presenting here now, the notation, applies specifically to transition state theory. You've got reactants, that's represented here by these two molecules. This looks like it could be an OH, that's CH3, that's CH3Br apparently, OH minus, CH3Br. That's the transition state and here are the products.
Notice that in this reaction, a new bond is formed between this oxygen and this carbon, and this bond here between this carbon and this bromine is broken. In the transition state, what we're depicting is that this bond is partially formed. You see how long it is? It's a long, weak bond. And that bond is partially broken. See how long it is? It's a long, weak bond also. That bond is going to be broken in the product state. This bond doesn't even exist in the reactant state. The transition state contains all of the bonds that are present in the reactants and in the products, but the bonds that are involved in the formation of the products from the reactants are weakened. See how long that is? See how long that bond is? These are two super weak bonds. We can construct a transition state by thinking about what the products look like, what the reactants look like, and then thinking about how the products are formed from the reactants geometrically. How does that happen? This shows an attack in a particular geometric orientation of the OH minus on the CH3Br. Thinking about this and thinking about this, we can construct what this transition state should look like using a few simple rules. It turns out that transition state and activated complex should not technically be used interchangeably. It's a nuance. The activated complex actually exists over a range of this reaction coordinate, whereas the transition state in principle exists only at a particular point. We're not going to bother ourselves with that distinction today. There's a fine point there that you should know: activated complex and transition state are not identical terms, but today we're just going to use those two terms interchangeably. Now, with that as a premise, let's see if we can work out what the reaction rate is. Here's our mechanism, our transition state theory mechanism for this reaction. Yes, so we can write an expression for this equilibrium constant. What is it? Product over reactants. Notice that I'm normalizing and being very careful here to write activities, the activity of the transition state, the activity of A, the activity of B, because I'm dividing by the standard concentration here. When we're done canceling these C0s, I end up with an extra factor of C0 in the numerator that ensures that the equilibrium constant is dimensionless. Now, unfortunately, I used the Microsoft Word equation editor to write a lot of these equations. It does not allow me to write a zero with a line through it. That is the symbol for the standard concentration, one molar, for example, but I'm just going to call it C0. There's another issue. See this double plus here? The Microsoft equation editor doesn't contain a diesis. What's a diesis? It's that thing, the double dagger. The double dagger is the same thing as the diesis. When I write two pluses, I'm just indicating the transition state. That refers to the transition state. That's the equilibrium constant that involves the transition state. That's the unimolecular reaction rate constant that involves the transition state. You see how I'm going to write that as that? If you see the double plus, it's just the diesis. If anybody knows a workaround for that, that would help me out a lot. How do I write a diesis in the Microsoft equation editor? Here I cheated. I put a white square here and I pasted this on top. Then I thought, I can't do that.
There's like 106 slides in this presentation. Now, we could write this in terms of pressures if we want to. Here's the concentration version. Pressures, no difference. Now, if we flash back, we're talking right now about chapter 20. If we flash back to chapter 17, it turns out we can calculate equilibrium constants from partition functions. Recall, partition functions are very important to us because they allow us to make a connection between statistical mechanics and thermodynamics. Partition functions contain information about the actual molecule. We can look at a molecule, and if we know something about its state distribution, we can write its partition function. What we learned in chapter 17, what we didn't have time to talk about in the class this quarter, is the fact that you can also use these partition functions to calculate equilibrium constants. If you want to read more about that, it's on page 670. Turns out to be an important thing that we left out. Here's what's there. Here's some generic reaction: a moles of A, b moles of B, c moles of C, d moles of D. Here's what the equilibrium constant expression looks like written out in terms of the standard molar partition functions for A, B, C and D. So what is this? This is the standard molar partition function. M means molar, A is species A. What's that? That's Avogadro's number. That's not a misprint. Avogadro's number in every single case. And that is the difference in the zero point energies between reactants and products. If this is your generic reactant here, here are vibrational energy levels. Here are vibrational energy levels of the product. That's the ground vibrational energy level. That's the ground vibrational energy level. That's delta-r E-zero: the difference between the zero point energies of the reactants and the products. Here's our Gibbs free energy as a function of reaction coordinate. The delta-r E-zero is closely related to this green quantity that I'm indicating here. If this thing was in its ground vibrational energy level, and this thing was in its ground vibrational energy level, both of these guys, and both of these guys, then this would be delta-r E-zero. We're always talking about the zero point energies. We could say something about this equilibrium constant. We could calculate it using this equation right here. In transition state theory, that's not the equilibrium constant we care about. What we say in transition state theory is this is in equilibrium with this. I don't care about the equilibrium of this with this. That's going to give me the normal equilibrium constant that I can learn about in chapter 17. What transition state theory says is these two guys are in equilibrium with one another. What matters is this delta-r E-zero here, this thing that we normally call the activation energy. It's the activation energy from the Arrhenius equation. We want to calculate that imaginary equilibrium constant. These things may actually be in equilibrium, but this thing is not really observable except using some exotic nanosecond or picosecond spectroscopy. For a normal equilibrium applied to this generic reaction right here, I could calculate the equilibrium constant using this equation. Now I'm going to apply that same thinking to transition state theory. Here's the transition state equilibrium we care about. A reacts with B to give this transition state. Now what do I want?
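In symbols, the chapter 17 result just described, for a generic reaction a A + b B -> c C + d D (the book's notation; the q's are standard molar partition functions and N_A is Avogadro's number):

```latex
K \;=\;
\frac{\left(q^{\circ}_{\mathrm{C},m}/N_A\right)^{c}\left(q^{\circ}_{\mathrm{D},m}/N_A\right)^{d}}
     {\left(q^{\circ}_{\mathrm{A},m}/N_A\right)^{a}\left(q^{\circ}_{\mathrm{B},m}/N_A\right)^{b}}
\;\exp\!\left(-\frac{\Delta_r E_0}{RT}\right)
```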
I want to put the partition function for that guy in the numerator, for these guys in the denominator, and then that's Avogadro's number. That's just an extra factor of Avogadro's number left over because I've got two reactants and one product. I can calculate this equilibrium constant that applies to the formation of the transition state. Keep in mind, everything we're talking about here is kinetics. We're sort of mixing thermodynamics with kinetics. They took a thermodynamic concept, the equilibrium constant, and applied that to transition state theory. So you're with me so far? We've got an equilibrium constant here. We can calculate it using statistical mechanics. Two-step mechanism. That's the first thing to understand. The second thing to understand is that this rate right here, shown in gray, is going to be approximately equal to the frequency with which the transition state crosses over the top of the barrier. We've got a barrier here. We've got a transition state. We are moving along this reaction coordinate from reactants to products. The frequency with which this thing moves across the top of this barrier, that's going to closely approximate this rate, the rate at which products are formed. That seems to be a statement of the obvious. Obviously as you cross over this barrier from reactants to products, the frequency with which that happens, that's the rate of this reaction. It's totally obvious to say that. The reaction rate I can write as this frequency times whatever the concentration of this transition state is. We're going to talk about this frequency. If you think about this as being a molecule, there's a vibration that has to happen here. This guy moves back and forth between these two guys. We get an asymmetric vibration of this transition state. The frequency that characterizes that mode is the frequency that we care about. Your book includes something called the transmission coefficient. We're just going to assume that's one. See that kappa right there? Just forget about it, it's one. The rate of the reaction is given by this special frequency that applies to the reaction coordinate times the concentration of the transition state, but we know we can also write the rate in the normal way with a rate constant times A times B. That's what we've been saying all along. Here was our equilibrium constant expression for the transition state. If I just solve for AB double dagger in this expression right here, I get this, and I can plug that in for AB double dagger there. I've just solved for the concentration of the transition state from here. Now I'm going to plug that into this equation right here. I'm going to put all of this into here, and so there it is. There's the frequency, the special frequency; here's all the rest of that. That's our reaction rate. In essence, this is the phenomenological rate expression that we would normally write for this reaction. A plus B goes to products. We know that the reaction rate is K times A times B as long as that's an elementary reaction. What we've said is, look, that rate constant is given by this expression right here: that frequency times that equilibrium constant divided by this concentration term, just to keep the units right. And the important thing is these two parameters here relate directly to physical properties of the transition state that we can think about calculating.
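Putting the last few steps together in one place (a sketch in the lecture's notation, with the transmission coefficient set to one):

```latex
v \;=\; \nu\,[\mathrm{AB}^{\ddagger}]
  \;=\; \nu\,\frac{K^{\ddagger}\,[\mathrm{A}][\mathrm{B}]}{c^{\circ}}
\qquad\Longrightarrow\qquad
k \;=\; \frac{\nu\,K^{\ddagger}}{c^{\circ}},
\qquad
K^{\ddagger} \;=\; \frac{[\mathrm{AB}^{\ddagger}]\,c^{\circ}}{[\mathrm{A}][\mathrm{B}]}
```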
In other words, we can calculate this rate constant from fundamental properties of the transition state, because we know enough physical chemistry to do that already. The key point is we have expressions for this equilibrium constant and for this rate constant right here. We said this little rate constant is just equal to nu, and the overall rate constant is equal to nu times K double dagger divided by C0. Let's say that we actually do want to calculate now what the rate constant is. Let's say that we actually want to calculate that K right there. We have to be able to calculate big K double dagger, and we've got to be able to calculate little k double dagger, which is just equal to nu. How are we going to do that? Well, here's the expression for big K double dagger. We know what the partition functions of A and B are. We know how to calculate those already. We've done that. How do we calculate the partition function of the transition state? I'm just confused on your K double dagger expression. That's the same as that. Well, that's a little confusing, isn't it? What the heck did I do? That's supposed to be K. Sorry, that shouldn't be K double dagger. That should just be K. You're right, those double daggers shouldn't be there. Thank you. So how do we calculate the partition function of that transition state? It's a transition state, for goodness sakes. What is it? This is the question that these guys wrestled with when they worked out transition state theory. If we think about the vibrational partition function first, it's the vibrational partition function that we really care about here. Essentially the transition state is undergoing a vibration. How do I think about that? This bond is getting longer. This bond is getting shorter. Just like an asymmetric stretch of the transition state, that's the mode that we care about. If we could calculate the partition function for that vibrational mode, that's critical to understanding what the reaction rate is going to be. Here's the generic expression for the vibrational partition function, for some mode that has a natural energy h nu and a natural frequency nu. What can we say about this magic mode that we care about, this asymmetric vibration along this axis? Is that going to be a high frequency mode, a low frequency mode? What do you guys think? High frequency? Hello? So tell me, these are weak bonds here, right? Do weak bonds have high frequencies or low frequencies? Low, thank you. We're going to expect this to be a very low frequency mode. We call these soft modes. Here's a picture from your chapter that I think is somewhat non-intuitive. Here's the transition state up here. What this picture is trying to convey is that the transition state has a very shallow vibrational energy well, with vibrational energy levels that are very, very close together in energy. In other words, the frequency to go from here to here, that h nu, is tiny. The key point is that the frequency that characterizes these transitions in the transition state along this direction right here is very small. What that allows us to do is simplify this equation. We can write this exponential as a series and we can truncate it at the first term. When we do that, we just get kT over h nu.
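The truncation just described, written out (the soft-mode limit):

```latex
q_{\mathrm{vib}} \;=\; \frac{1}{1 - e^{-h\nu/k_B T}}
\;\approx\; \frac{1}{1 - \left(1 - h\nu/k_B T\right)}
\;=\; \frac{k_B T}{h\nu}
\qquad\text{valid when } h\nu \ll k_B T
```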
We take this normal expression for the partition function, we truncate it: we write the exponential as a series expansion and we truncate it at the first term, and we just get kT over h nu. This is a special nu. This is the frequency that describes motion over the barrier. Whatever the transition state is, whatever the mode along which product bonds are getting formed and reactant bonds are getting consumed, we can think about that process as involving a single vibration that has, in general, a very low frequency. That's what they realized. Conceptually this is not obvious. I think everybody in this room would agree. None of this stuff that I'm talking about in the last five minutes is obvious. This was the conceptual leap that Polanyi made. I can write the whole partition function for the transition state as the partition function for this low frequency mode times the whole rest of the partition function. Notice that there's going to be a rotational, an electronic, a translational. The partition function contains many other manifolds, and other vibrational modes as well for the molecule that are orthogonal to this special mode. We're going to roll all of that into this guy over here. This is just the partition function for the magic mode that corresponds to the reaction. This is the rest of Q, all the rest of the partition function for all the other modes, manifolds, and so on. So that's the partition function. I can just plug that in then. Here's the partition function that we were wondering about. Here's the expression we now have for it. Notice that the rest of this partition function involves things that we can already calculate, because they're things that are not perturbed in the molecule. What's perturbed in the transition state is that product bonds are getting formed and reactant bonds are getting broken. We can describe that. We can roll that process all into one mode that has this characteristic frequency here. We're going to have to figure out what that frequency is. But all the rest of this stuff, we can just roll in. Modes that are orthogonal to the reaction coordinate, we can just calculate their vibrational partition function, their rotational partition function, all of that stuff, using the conventional methods that we already know about. So now I rewrite this in terms of this guy right here. Look at this kT over h nu. Now that's right there. Here's the rest of the partition function, this Q with a line over it here. And I'm getting close to being able to calculate this guy. This is the contribution along the reaction coordinate only. Yes, yes, this is all the rest of it. Yes. Okay, so our expression for the rate constant becomes this, and we're just plugging now this expression in for this equilibrium constant with a double dagger on it. So when we do that, we can cancel this frequency. Hey, turns out we don't need to know what it is. It cancels, for gosh sakes. We don't have to measure it. We get this expression right here, and this is the famous Eyring equation. We derived it, going way too fast, in about 20 minutes. This is a very important equation. Now why is it so important? Yeah? How do we find K using K? How do we find that K using that K? No, the little K within the equation. Oh, that K. This could be confusing. That's Boltzmann's constant. Sorry. This is the rate constant, the phenomenological rate constant for the reaction. That is the equilibrium constant that we calculate using this expression right here. Right there, that whole thing.
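As a small sanity check on the Eyring prefactor: kT over h, with the little k being Boltzmann's constant, is a universal attempt frequency of about 6e12 per second at room temperature. Here's a sketch, with the stripped-down equilibrium constant left as a placeholder you'd compute from partition functions.

```python
k_B = 1.380649e-23   # Boltzmann's constant, J/K (the 'little k')
h   = 6.62607015e-34 # Planck's constant, J s

def eyring_k(T, K_bar_ddagger):
    """Eyring equation sketch: k = (k_B*T/h) * K_bar_ddagger, where
    K_bar_ddagger is the transition-state equilibrium constant with the
    reaction-coordinate vibration stripped out (its frequency canceled)."""
    return (k_B * T / h) * K_bar_ddagger

print(f"k_B*T/h at 298 K = {k_B * 298.0 / h:.2e} s^-1")  # ~6.2e12 s^-1
```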
Now you might ask, are you going to be called upon on the final exam to calculate all of this stuff? No. But I have to be able to sleep at night, and so I am going to disclose all of this information to you, even though I don't think it would be fair of me to write a question where you have to calculate all of it. If you look at the end of this lecture, when I post this lecture after class, it does in fact have like 120 slides, and the last 20 slides are a calculation. It allows you to calculate the rate constant for H plus H2 goes to H2 plus H, the very first reaction that was studied using this equation. We can work out exactly what the rate constant is, and if we had enough time we would do it, but we don't. So if you're interested in this rate theory stuff, this transition state theory stuff, you might want to just look down at those slides, because we're never going to get to them today. It doesn't matter. It does matter, but. So here's the Eyring equation. I'm just substituting now for K double dagger here. Here's the Eyring equation. Do you see the parallels? The activation energy is this delta E0 that we were talking about, the difference in the zero point energies for the reactants and the transition state. This pre-exponential factor, A, that we've never said anything about, that's given by this collection of variables right here: RT over h times this guy, what's left over from the partition function. We strip out from the normal partition function for the transition state the part of it that pertains to the vibration along the reaction coordinate, and that's what's left over, because remember, that frequency just canceled for us. We don't need to know it. That's a good thing, because who knows what it is. How would you measure it? You'd have to have some exotic spectroscopic method to do that. Well, the Arrhenius equation is a special case of the Eyring equation. We can calculate these things, and if you don't believe me, go to slide 101, and we go through and we do it laboriously for a particular reaction. We're not going to say more about it, sadly. Here are pre-exponential factors. You can calculate this one; actually, it is calculated later in this lecture. We can calculate these pre-exponential factors for the Arrhenius equation knowing transition state theory, so it's very powerful. What can we do in the last 15 minutes? Something very important. We want to apply the same thinking to a reaction that occurs in water. The key point is that we want to think about what the influence of charge is on the reaction rate. It's not inconceivable that there could be a question like this on the final. For example, two chlorides react with lead ion to give lead chloride. All of these things can be in solution. Lead chloride has some solubility in water, very low. At what rate would this reaction occur? What is the influence of the charge on the reaction rate? You might think, naively, that if the reactants are oppositely charged, they're going to be coulombically attracted to one another, and the reaction rate is going to be accelerated. If you don't think about this too hard, or even if you think about it hard, you might come to that conclusion. Negatively charged reactants and positively charged reactants are going to be coulombically attracted, and boom, they're going to react really fast. That's going to accelerate the reaction rate. That turns out to be exactly wrong, and it's important to understand why.
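Before moving on to solution reactions, here's a sketch of that Arrhenius-Eyring parallel in code; the partition-function ratio and the zero-point energy gap below are invented placeholders, not numbers from the H plus H2 calculation in the slides.

```python
import numpy as np

k_B = 1.380649e-23; h = 6.62607015e-34; R = 8.314  # SI units

def k_arrhenius(T, A, Ea):
    # Arrhenius form; Ea in J/mol
    return A * np.exp(-Ea / (R * T))

def k_eyring_like(T, Q_ratio, dE0):
    # Eyring-style form: the role of A is played by (k_B*T/h) times the
    # leftover partition-function ratio; the role of Ea by the zero-point
    # energy gap dE0 (J/mol). Q_ratio and dE0 are made-up numbers here.
    return (k_B * T / h) * Q_ratio * np.exp(-dE0 / (R * T))

T = 300.0
print(k_eyring_like(T, Q_ratio=1e-6, dE0=40e3))
print(k_arrhenius(T, A=(k_B * T / h) * 1e-6, Ea=40e3))  # same value, by construction
```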
It's great once in a while in chemistry when you encounter a concept that is completely counterintuitive at first and, later on, hopefully intuitive. I've got to make this happen in like 12 minutes. Here's the idea. A reacts with B to give products. That's the charge on A and B: Z A and Z B. The key point here is that the reactants are charged. There's a version of transition state theory for this, but it's not, strictly speaking, transition state theory; it's a thermodynamic version of it. You can't use transition state theory in its normal form to describe reactions in solution, because it doesn't account for the solvent. Transition state theory in its normal form does not account for the complexities imparted by having the solvent present in the reaction. This looks like transition state theory, but strictly speaking it's something that's related to it, not exactly the same thing. We're going to simplify this notation; we're going to give this whole mess a symbol. Notice something. If we think about this in terms of transition state theory, when A reacts with B, if we form a transition state, the transition state will have a total charge equal to Z A plus Z B. The total charge of the transition state will be the sum of the charges on A and B, because charge has to be conserved here. That's a key point. So this is the reaction rate, and this is just that rate constant right there. This is the normal phenomenological rate that we would write for this reaction. Then we have to think back and remember something about activities. Activities are going to be the key to understanding how this works. The activity of some ion A is equal to its concentration times some activity coefficient, gamma sub A. That's the activity of A. That's the activity coefficient. The activity coefficient for a neutral is one. The activity coefficient only deviates from one as a consequence of coulombic interactions with other ions in the solution. If other ions are present in the solution, the activity coefficient will be less than one. The more ions, the lower the activity coefficient. And there's something called the Debye-Hückel limiting law. That's the activity coefficient. That's the ionic strength, and the ionic strength is just given by this expression, where this is the concentration of the ion and this is the charge on the ion, for every ion in the solution. I add up all the ions in the solution, multiply by the square of their charge, take that times one half, and I've got the ionic strength. It's never obvious to me when the equation has a log in it like this, but the higher the ionic strength, the lower gamma becomes. If I is zero, gamma is one. This is the ionic strength on this axis. This is gamma on this axis. If the ionic strength is zero, gamma is one, and it deviates from one as the ionic strength goes up. And the size of this deviation here depends on what the charge is on the ion of interest, ion A, because here's its charge. So if it has a charge of one, there's a small deviation. If the charge is two, you get a bigger deviation. If the charge goes up to three, you get a huge deviation. So activity effects have everything to do with charge. How many of you had 151? From me. So this is review for all you guys. Good. So when we write an equilibrium constant, we write it in terms of activities. Every activity is an activity coefficient times a concentration.
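Here's a short sketch of both formulas, assuming the conventional limiting-law constant of about 0.509 for water at 25 degrees Celsius (log base 10); the last comment restates the activity relation the next paragraph leans on.

```python
import math

def ionic_strength(ions):
    """I = (1/2) * sum of c_i * z_i**2 over every ion; ions = [(conc_M, charge), ...]."""
    return 0.5 * sum(c * z**2 for c, z in ions)

def gamma(z, I, A=0.509):
    """Debye-Huckel limiting law: log10(gamma) = -A * z**2 * sqrt(I)."""
    return 10 ** (-A * z**2 * math.sqrt(I))

I = ionic_strength([(0.010, +1), (0.010, -1)])  # 0.010 M NaCl -> I = 0.010
for z in (1, 2, 3):
    print(f"z = {z}: gamma = {gamma(z, I):.3f}")  # deviation grows sharply with charge
# and the activity itself is just: a_A = gamma_A * [A]
```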
You guys all know that. And what's more important is you have intuition about what the influence of ions is on equilibria. Here's an important piece of intuition that everybody should have in the room, especially those of you who took my class. Look at some equilibrium like this, all right? Acetic acid is a weak acid. Acetic acid dissociates to give hydronium ion and acetate. If I dump sodium chloride into the solution, what's going to happen to the pH? Sodium chloride is an inert salt. Sodium chloride has no acidity or basicity of its own. And yet the pH will change in a predictable way. What will happen? How many people think the pH is going to go down? It's going to get more acidic. How many people think the pH is going to go up? It's going to get more basic when I add salt. Get those hands way up there, because I want to see. You guys never had 151 from me. Now, if you add salt to this equilibrium, it will always shift to the right. It will always shift to the right. The addition of inert salt to any equilibrium always shifts it in the direction of the most ionic state of the equilibrium. See how there's ions here and there are no ions here? That's the most ionic state. If I add salt, the reaction will shift to the right. It's called salting in. Now I can prove that. Here's the equilibrium constant for this reaction. Here are the activity coefficients. The activity coefficients for the neutrals are one. The activity coefficient for this guy's one. These activity coefficients for the charged species are less than one. They will become lower as I increase the ionic strength of the solution. Those guys will get smaller. What that means is that the equilibrium constant that applies, if we think about these activity effects, is going to get bigger. If these guys get smaller, that gets bigger. That's what happens when we add sodium chloride to the solution, or any other inert salt. It's going to get more acidic. What about this guy? What happens to the solubility of lead sulfide if I add sodium chloride to the solution? Sodium chloride doesn't appear in this equilibrium. Why would it affect this equilibrium? And yet it does, in a predictable way. What's going to happen? Is the lead sulfide more soluble or less soluble when I dump sodium chloride in? More. Why? Because, look, this side of the equilibrium has got a lot of ions. This side's got no ions. If I add more ions to the solution, the side of the equilibrium with more ions is favored. I can prove that's true by working out what the new equilibrium constant is. These are the two activity coefficients. They both get smaller when I add sodium chloride. So the equilibrium shifts to the right. Every time you see an equilibrium, you can affect its position by adding an inert salt to the solution or removing one from it. That will alter the equilibrium in a predictable way. What does all this have to do with transition state theory? Very simple, to make a long story short. We can work out the math, but we're just going to skip over it here, because we're almost out of time. Here's the bottom line. The transition state has the total charge of the reactants in it. If the transition state has a high charge, then the rate of the reaction is going to be influenced by the presence of other ions in the solution. At the end of the day, when we're done doing the derivation, we get this equation right here, which is the equation for the kinetic salt effect. What is this big K right here?
It's just a collection of the activity coefficients for the transition state and the reactants. That is the rate constant that applies at one molar everything. That's why it's got a zero on it. That's the actual rate constant of the reaction. This is the most important slide that has to do with this second concept here. What am I plotting here? This is the reaction rate for a reaction that involves A, with some charge, plus B, with some charge, going on to products. This is the ionic strength. The key point here is we want to understand how salt affects the reaction rate. It's easy to understand how salt affects the equilibrium constant. How does it affect the reaction rate? The way to think about that is, if A reacts with B and they both have a plus 2 charge, they're both positively charged, the transition state has a charge of plus 4. If there's an equilibrium between the transition state and the reactants, the transition state is going to be favored by the addition of salt. The reaction is going to get accelerated. Isn't that counterintuitive? The reaction rate goes up as I add salt to the solution. Even though the reactants have the same sign of charge and have to overcome a coulombic barrier to react, because they're both positively charged, the reaction rate goes up, not down. Check this out. If a 2 plus ion reacts with a 2 minus ion, you'd expect there to be a big coulombic attraction, right? The reaction should go faster. It goes slower. Does this have to do with the orthogonal components of the transition state's partition function? Absolutely not. No? No. Nothing at all? No. Nothing to do with the ionic development? Zero. Okay. It's not a bad question, but the answer's no. Does everybody see why this happens? There's an equilibrium. I'll say this one last time, because I don't have any more time. There's an equilibrium between A and B, the reactants, and the transition state. What we just agreed on is that we can decide which side of an equilibrium will be favored when we add salt: the most ionic state of the system is favored. If the charge on the transition state is higher than the charge on either one of the ions, and the only way that can be true is if the ions have charges of the same sign, like plus 1 and plus 2, plus 2 and plus 2, plus 1 and plus 1, then the charge on the transition state will be like 3, 4, and so on. The transition state gets favored by the addition of salt, and the reaction rate goes up. Look, it goes up here too. Look, it goes up here. So if there's no charge, nothing happens. And if the charges on the reactants are opposite to one another, then the charge on the transition state is lower than the charge on the ions. The transition state has a lower ionic character; it's the least ionic state of the system. The reactants are more ionic than the transition state, and the reaction is slowed down by the addition of salt. That's totally counterintuitive. If you understand that, you understand something that most chemists, even, are going to get wrong. You're going to get it right. You see those charges, you say: that reaction is going to go slower. Okay. So on Friday, yeah, there's more here. Yes, oh my goodness. There's like 20 more slides that work through the equilibrium; we're not going to get to those. So on Friday, we're going to work on the final. Please take the course evaluations. All right. Thank you.
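As a postscript to this lecture, here's a numerical sketch of the kinetic salt effect just described, assuming the same Debye-Hückel limiting-law constant of about 0.509 (water, 25 degrees Celsius): the sign of the product of the two reactant charges decides whether salt speeds the reaction up or slows it down.

```python
import math

def k_ratio(zA, zB, I, A=0.509):
    """Kinetic salt effect in the Debye-Huckel limit:
    log10(k / k0) = 2 * A * zA * zB * sqrt(I)."""
    return 10 ** (2 * A * zA * zB * math.sqrt(I))

I = 0.010  # ionic strength, mol/L
for zA, zB in [(+2, +2), (+1, +1), (0, +1), (+2, -2)]:
    print(f"zA={zA:+d}, zB={zB:+d}: k/k0 = {k_ratio(zA, zB, I):.2f}")
# like charges: k/k0 > 1 (salt accelerates); opposite charges: k/k0 < 1 (salt slows)
# a neutral reactant: k/k0 = 1 (no effect), exactly as the plot shows
```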
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 26. Thermodynamics and Chemical Dynamics -- Transition State Theory -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry.) We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:02:54 Where Does the Arrhenius Equation Come From? 0:04:34 Transition State Theory 0:11:16 Activated Complex 0:14:30 Equilibrium Constants from Partition Functions 0:23:25 Calculating the Partition Function 0:26:28 Vibration Along the Reaction Coordinate 0:32:06 The Eyring Equation 0:35:38 Calculating the Pre-Exponential Factor in the Arrhenius Equation 0:39:27 Activities 0:40:26 Debye-Huckel Limiting Law 0:42:02 Thermodynamic Constant 0:47:25 Equation for the Kinetic Salt Effect
10.5446/18961 (DOI)
Okay, how are you guys? Everybody have a good weekend? No? Why are you guys wearing ties? You have to? Why do you have to? Pledges. What are you pledging? Okay, you look fantastic. Alright, so we are, when I say we, I mean Stephen and Jean-Marc are grading the midterm. It's about done, and they're going to post scores later on today. There's a key that keeps getting updated. The current version of that is on the web already. It's been updated a few times. So rumor has it that you did better than on midterm one, but I haven't seen the results yet, so don't put too much weight on that. I'm hopeful that you all did. So yes, yes, the key's posted. Scores are going to be posted today. Exams are going to be returned as PDFs via rapid return, and I think you know that that's a misnomer. It ain't rapid. It takes four or five days or so. Okay, but we'll get them returned as fast as we can. I'm going to post a new how-am-I-doing score today. I'll drop your lowest two quizzes and let you know how you're doing in the class. You'll be able to look at that score and see what kind of grade you have going into the final exam. Okay, there's one quiz left though. There's a quiz Friday, and it's really going to be on the stuff that we talk about today and also the stuff we talked about way last Monday. The steady state approximation, all the kinetic stuff that we've talked about, really, is covered by this quiz. Okay, we've already had one quiz that had kinetic stuff on it. Okay, so that's the very last quiz. Some of you probably have done well enough so you don't even need to take it, because you can drop two quizzes. So if you've got five really good ones already, take the first 15 minutes off tomorrow. The final exam is going to be like this. It's comprehensive, but it's going to emphasize the kinetic stuff that we're doing here at the end of the quarter, because the kinetic stuff hasn't been on a midterm exam yet. So half of the final exam is going to be kinetics, 25% thermo, 25% stat mech. But I'll break it down for you problem by problem. Alright, next week. So don't worry about that. I just want to say that the stuff that we're talking about now is going to be worth sort of 100 points on the final exam, so it's rather important. Okay, so we're going to review the steady state approximation, because I know that's not foremost in your minds anymore. You've been studying for midterm two. It was all about thermo. We're going to do an example. Then we're going to talk about the Lindemann-Hinshelwood mechanism. We started to talk about this last Monday, but I'm sure that that's sort of a vague memory for you at this point. So we'll go back and look at it carefully. Okay, so the whole idea in the steady state approximation is that we want to simplify the mathematical expressions for consecutive reactions. Reactions where here's a reaction, here's another reaction that follows that one, and there could be another one that follows this one. And in general, the reactions are going to be more complicated than this. I'll give you an example later on. But the basic idea is that, for sequential reactions like this, the mathematical expressions for the integrated rate laws get exponentially more complex as the mechanism gets larger. They don't increase linearly with the size of the mechanism; they increase exponentially. So we need a mathematical tool that allows us to simplify what these expressions look like. And that tool is the steady state approximation.
We're going to use it to simplify several different types of reaction mechanisms, including the Lindemann-Hinshelwood mechanism and enzyme kinetic expressions, which we're about to start talking about on Friday. So the basic idea is that we're going to set the time rate of change for all intermediates to zero. What's an intermediate? Well, an intermediate is something that shows up in a sequential reaction, but it is not a product, and it's not a reactant. So in this case, the intermediate is obviously B. We're going to set the time rate of change of B to zero, and then we're going to solve the simplified kinetic expressions that result from making this simplification. So, for example, we've set the time rate of change of B equal to zero, but what is that? Well, there's a rate of formation for B, because B is formed at a rate equal to K1 times A, and there's a rate of consumption of B, because B is turned into C at a rate of K2 times B. So there's going to be a minus K2 times B and a plus K1 times A, and that difference has to equal zero. So if that's the case, then obviously that has to equal that. That's what we're showing right here, and so I can then solve for B. This should say B, steady state; it should have a steady state subscript here. The steady state concentration of B is equal to K1 over K2 times A. Now, one of the things that we showed way last Monday, a week ago Monday, is that the steady state approximation is really only going to work when K1 is much, much less than K2. If you plot what the concentration of the intermediate is doing, you can convince yourself that only in this limit is the concentration of the intermediate going to be quasi-constant. Even in this limit, it's not perfectly constant, but it's quasi-constant. Here we're saying the time rate of change of B is zero. In other words, B is not changing. Its concentration is not changing at all as a function of time. That's what we're assuming. In order for this expression to approximate this expression, K1 has to be much, much less than K2. Can everyone see that? K1 over K2 has to approximate zero in order for this expression to make sense. Keep in mind that A is always going to go from its initial concentration to zero. According to this mechanism here, A is going to change a lot. In order for B not to change, K1 over K2 has got to be small, very small. Everybody with me on that? That's the assumption that we're always making somewhere in the steady-state approximation. Here's the case, here's the mechanism, here are the rigorous equations that describe the concentrations. Here's what's happening to A, just like I said. It starts off at some initial concentration and then goes to zero. Here's C building up, and here's the concentration of the intermediate. It's quasi-constant. Why? K1 over K2 is just 0.02. It's tiny. This is the limit where the steady-state approximation is going to work pretty darn well for us. Now we're going to solve the simplified equations that result. We said the steady-state concentration of B is K1 over K2 times A, and so we can then plug that in to the rate at which C is produced. Here's the rate of the reaction. In terms of C, it's K2 times B, and now I can just plug this expression in for B, boom, and the K2's are going to cancel, and so the rate of this reaction is just going to be equal to K1 times A. We know what the integrated rate expression is for A. It's just a first-order reaction with rate constant K1. This is what it's equal to if I work it out.
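If you want to convince yourself of this numerically, here's a sketch that integrates the A to B to C mechanism with scipy and compares the exact B against the steady-state prediction, K1 over K2 times A, using made-up rate constants with K1 much less than K2.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.02, 1.0   # k1 << k2, the regime where the SSA is supposed to work
A0 = 1.0

def rhs(t, y):
    A, B, C = y
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

sol = solve_ivp(rhs, (0, 200), [A0, 0.0, 0.0], dense_output=True)
t = np.linspace(5, 200, 5)          # skip the brief induction period
A = sol.sol(t)[0]
B_exact = sol.sol(t)[1]
B_ss = (k1 / k2) * A                 # steady-state prediction
print(np.c_[t, B_exact, B_ss])       # the last two columns track each other
```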
If I want to know the rate at which the product is produced: obviously this is a decaying exponential. That's the integrated rate law for A, and now I'm going to integrate that to get the concentration of C as a function of time, and this is what that integral gives me right here. So C as a function of time is going to be given by that equation. We get really simplified integrated rate laws from the steady-state approximation compared to the rigorous expressions, and the thing to keep in mind is that this is the best-case scenario. This is the simplest possible sequential reaction mechanism: two first-order reactions in sequence, where there's only one reactant and one product in each. I think you can see there's already a pretty big difference in complexity between these equations and these equations. Especially look at C. That mathematical expression is a lot simpler than this one, and as we make this mechanism more and more complex, these equations blow up exponentially. That's why we need the steady-state approximation. How well does it work? Here K1 over K2 is tiny again. Look how well it works. The dashed lines are the steady-state approximation. The solid lines are the rigorous equations for the concentrations. Look at A, look at C. C is what we really care about. C is telling us what the rate of the reaction is. The steady-state approximation is also predicting B. You can see B is quasi-steady state. Its concentration doesn't change that much. It's changing, but it doesn't change that much. If I make K1 over K2 bigger, things should get worse, and you can sort of see that it's getting worse. If I make it bigger yet, now to 0.9, you can't really see what's going on unless I blow this up, but when I blow it up, you can see that it's not doing a very good job of predicting B anymore. Here's what the steady-state approximation is doing. The steady-state approximation says B is some fraction of A, and A is changing a lot, and B is really not doing what the steady-state approximation is assuming. Now look at the difference for C. Here's the steady-state approximation and the actual... So it's starting to break down. We expect it to break down here. It's not surprising that it does that, and if we make this bigger yet, we get a complete train wreck. Here's the steady-state approximation for C. This is the dashed line here. Here's the solid line. Big difference between the steady-state approximation and what the concentration of C actually is. So we want to keep in mind the steady-state approximation doesn't always work. This is even worse. This is even worse than that. That's the steady-state approximation. That's what's actually happening. Bad idea to use it there. Okay. So does everyone understand the steady-state approximation? Let's do an example. Here's a real-world example for the steady-state approximation. Here's the reaction mechanism. Three reactions. If I wrote down the actual kinetic rate expressions for this mechanism, it would take three screens. It's a nightmare of enormous proportions. Let's see if we can use the steady-state approximation to simplify what's going on here. First of all, what have we got to do? We have to find intermediates. If there are no intermediates, there's no point in using the steady-state approximation. The steady-state approximation only assumes that the time rate of change of intermediates is zero. So identifying the intermediates is step one in actually using the steady-state approximation.
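Before we dive into that example, here's the breakdown you just saw on the plots, reproduced in code; this sketch uses the standard closed-form solution for A to B to C (valid when K1 and K2 differ) against the steady-state result, and the worst-case error in C grows as K1 over K2 approaches one.

```python
import numpy as np

def C_exact(t, A0, k1, k2):
    # closed-form solution for A -> B -> C with B0 = C0 = 0 (requires k1 != k2)
    return A0 * (1 + (k1 * np.exp(-k2 * t) - k2 * np.exp(-k1 * t)) / (k2 - k1))

def C_ssa(t, A0, k1):
    # steady-state result: C builds in as a single first-order process
    return A0 * (1 - np.exp(-k1 * t))

t = np.linspace(0.0, 100.0, 500)
for ratio in [0.02, 0.5, 0.9, 2.0]:     # k1/k2
    k1, k2 = ratio, 1.0
    err = np.max(np.abs(C_exact(t, 1.0, k1, k2) - C_ssa(t, 1.0, k1)))
    print(f"k1/k2 = {ratio:4}: worst error in [C] = {err:.3f}")
```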
So look at this guy right here in pink. He's produced by this reaction, consumed by this reaction. He does not appear as a product. He is, by definition, an intermediate. Right? What about this guy? That's another intermediate. So there's two intermediates in this case. Wait a second. Here's bromide. It's showing up here and here. Is bromide an intermediate? No. No. No. Maybe. No. Maybe. Br minus is a catalyst. It's consumed in a step that drives the reaction forward, but it's regenerated as a product. Without Br minus, this reaction grinds to a halt. With it, the reaction occurs at whatever rate is characteristic of the reaction. But Br minus is not consumed by the overall reaction, because it's regenerated as a product, even though it's consumed by the second step of the reaction. So Br minus is a catalyst. You have to be able to recognize that. It's not an intermediate, and you can't apply the steady-state approximation to its concentration. Okay? What about Br minus? No, it's a catalyst. So, apply the steady-state approximation to the following mechanism. The rate of formation of the product, and this is the product we care about right here, by the way, its rate of formation is going to be equal to K3 times the concentration of this stuff times the concentration of this stuff. Everyone agree with that? Okay, so to apply the steady-state approximation, the first thing we do is we write down this rate right here. The rate of the reaction is the rate of the last step. Then, we look at these two concentrations and we ask ourselves: is that an intermediate? Is that an intermediate? That's not an intermediate, that's a rate constant. Are either of these species intermediates? Yes, that guy is. We agreed. That guy is an intermediate. There he is, and there he is. He's getting generated by this step and consumed by this step. So ONBr is definitely an intermediate. Let's apply the steady-state approximation to its concentration. To do that, we set the time rate of change of that species to zero. Then, we write a rate expression for it. We look at the mechanism. Here's the mechanism for the reaction, and we ask ourselves: how is ONBr generated? Well, where's ONBr? It's generated from this reaction right here. So I've got H2NO2 plus. That concentration times Br minus, with a rate constant of K2, is generating ONBr. Boom! That's what this term is right here. It's the rate at which this guy is getting generated by what? By this step right here. What's this guy? This is the rate at which the ONBr is getting consumed by this reaction right here. So that rate is the steady-state concentration of ONBr times the concentration of this stuff, times K3. So there's a minus sign in front of this guy and a plus sign in front of this guy. That's the generation rate. That's the consumption rate. Everyone see that? Those two have to be equal to one another if the steady-state approximation is valid. So then I'm just going to solve for the steady-state concentration of ONBr from this expression right here. I can move this guy over to the left-hand side, divide through by K3 and the concentration of this stuff. Boom! That's the kinetic expression for my steady-state concentration of ONBr. And then I just plug that in. We already said the rate of the reaction is equal to this expression right here. Plug that into this expression. And I've got a steady-state approximation expression for the rate of the reaction.
So I just took that whole thing there, stuck it in here, and so now I've got K3. That's the K3 right there. Times this whole mess, that's right here. Times the concentration of this stuff, that's this. Boom! That's our steady-state approximation rate law, a differential rate law. Now, I'm looking at this, and I'm asking myself: can I simplify it further? We've applied the steady-state approximation once. Looking at this guy, are there any intermediates among these species here? I guess I could have canceled that and that. But it's the same. Alright. Is Br minus an intermediate? No, we agreed it was a catalyst. Is H2NO2 plus an intermediate? Oh, I did cancel those. Good. Okay, now, are either one of these two guys intermediates? This guy could be, right? Where is he? He's right there, and he's right there, so he is an intermediate. So we can apply the steady-state approximation to his concentration too. We already applied it to this guy; now we can apply it to this guy. Okay, good. So let's ask again: are there any? Yes, yes, yes. So we set its time rate of change to zero. This is the rate at which it's generated. There are two processes that consume it. Let's look back. So it's generated by the reaction of protons with HNO2 with a constant K1. What did I do? Sorry. Protons, HNO2, with a constant K1: this is the rate at which this stuff is generated, and then there's two processes that remove it from the system. One of them is the back reaction here. So the rate of that is just given by H2NO2 plus times K minus one. That's this guy right here. And the other one is this forward reaction here, H2NO2 plus reacting with Br minus with a constant K2, and that's this guy right here. So these are consumption rates; this is the generation rate for this species. And then what we're going to do is we're going to solve for this guy. And if you do that, this is the expression that you get. And then what we do is we plug this into our original steady state expression, the one that we got after we applied the steady state approximation the first time. Now we've applied it a second time. So I'm going to plug this mess in for this species here. There it is. And is there some cancellation? No. That's it. That's our steady state rate law. It's a differential rate law. Now there's going to be two limiting experimentally observed rate laws. Looking at this guy, it's possible that this reaction will appear to have two different rate laws associated with it, because of this denominator here. If this term dominates, we're going to see one thing, and if this term dominates relative to this, we're going to see something else. Let's think about that for a second. Consider first the case where this guy, K2 times Br minus, is much larger than K minus 1. If that's true, then we can neglect K minus 1 in this denominator, because there's an addition here. If one term is much, much larger than the other, then this one will have an insignificant effect on the total size of the denominator. So what we're looking for are addition operations within the rate law. There's one right there, and that tells us that there's the potential to have two distinguishable rate behaviors. If K2 Br is much, much bigger than K minus 1, then this guy simplifies to this. We just leave out the K minus 1. The K2's cancel, the Br minus cancels, and we end up with a rate, that's the rate right there, equal to K1 times the proton concentration times HNO2. That's the first thing that can happen.
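Stepping back to the full rate law for a moment: if you'd rather let a computer grind through this algebra, here's a sketch using sympy. S stands in for the unnamed third reactant that ONBr reacts with in step three; it's a hypothetical label, since the lecture just points at the slide.

```python
import sympy as sp

k1, km1, k2, k3 = sp.symbols('k1 k_m1 k2 k3', positive=True)
H, HNO2, Br, S = sp.symbols('H HNO2 Br S', positive=True)   # S = third reactant
H2NO2, ONBr = sp.symbols('H2NO2 ONBr', positive=True)        # the two intermediates

# steady state on both intermediates: d[ONBr]/dt = 0 and d[H2NO2+]/dt = 0
eqs = [sp.Eq(k2 * H2NO2 * Br - k3 * ONBr * S, 0),
       sp.Eq(k1 * H * HNO2 - km1 * H2NO2 - k2 * H2NO2 * Br, 0)]
ss = sp.solve(eqs, [ONBr, H2NO2], dict=True)[0]

rate = sp.simplify(k3 * ss[ONBr] * S)
print(rate)   # -> k1*k2*H*HNO2*Br / (k_m1 + k2*Br), the rate law derived by hand
```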
We would predict that this would be observed, for example, at high bromide concentrations. If I make the bromide concentration higher and higher and higher, at some point the reaction should stop depending on bromide. We shouldn't see any dependence at all anymore, and in that limit, this rate law should apply to the reaction. The other possibility is that K2 Br is much, much less than K minus 1. If that's true, then I leave this guy out, and I only keep the K minus 1. In this limit, the reaction does depend on Br minus, and I get an effective rate constant that bundles together these three rate constants right here. This K effective would be K1 times K2 divided by K minus 1. Everyone see that? So I would expect to see a transition from this behavior at low Br concentrations to this behavior at high Br concentrations. We should see the reaction rate stop depending on Br minus in the limit where this rate law applies. Everyone see that? This will always happen when there's an addition operation somewhere in your rate law. There will always be the possibility for limiting behaviors. Okay. Now, we're going to look at a different application of the steady state approximation. We're going to talk about the mechanism of this reaction right here, the unimolecular reaction of some species, call it A. Now, if you look at this reaction right here, A plus B gives you products, it's completely intuitive how this reaction might occur. Imagine, for example, A and B are gas phase species. A is zooming around in the gas phase. You've got B zooming around in the gas phase. A and B collide, generate an activated complex, and then form products. There's a collision of A with B to form some activated complex that breaks up to produce products. Bimolecular reactions have an obvious mechanism in the gas phase. A collides with B to generate some kind of activated complex of A and B, and then this breaks up to give products that have some A character and some B character associated with them. This collision between A and B generates an energized transition state. For now, these are just words. We're going to talk about transition state theory later on. It's going to be important, but right now, it's just conceptual. Now, the basic idea is that this transition state here is located on this reaction coordinate here, between substrate, which are these guys, and product, which are these guys. It's located halfway in between. It's located, in fact, right at the peak of this energy level diagram. But once again, we're going to make this more quantitative later on; right now, it's just conceptual. That's how the bimolecular reaction happens. It makes perfect sense to us. A collides with B to give products. But what about when A just reacts by itself? How does that happen? There isn't any B. There's only A. What kind of reactions conform to this reaction mechanism? What kind of reactions do this? A just reacts. Well, there's two main kinds. Unimolecular reactions are either isomerizations or decompositions. Here's resveratrol. Remember that, the stuff that's found in red wine. It's so delicious. Impedes the growth of cancer in your body and has all kinds of other benefits for you, if you ingest kilograms of it. And then there are decomposition reactions: there's A, it falls apart into B and C. A just falls apart. That's a decomposition reaction. That's unimolecular. The reaction happens just to this molecule by itself. It falls apart. This molecule here oscillates between two different chemical states because of isomerization around that double bond. Right there.
And then there's two primary types of unimolecular reactions. How do they happen? How do they occur? How do we understand what's going on? We need a reaction mechanism. And these two guys proposed one in the early 1900s. Lindemann was an English guy, a physicist, who did a lot of stuff for the British government during World War II. He was the science advisor to Winston Churchill. And he was a really arrogant guy, who most people hated. But one of the things that he did is he worked out this reaction mechanism, and then somebody who he didn't even know, Hinshelwood, another English guy, came along and worked out all the mathematics and made it quantitative. Lindemann cooked up the mechanism; Hinshelwood refined it. And Hinshelwood ended up getting a Nobel Prize. Not only for this. I mean, Hinshelwood did other stuff too. But this is one of the things. Okay, so for this reaction, the Lindemann-Hinshelwood mechanism basically postulates that this reaction occurs in three steps. One, A collides with itself. When that happens, you generate activated A and ground state A, non-activated. This reaction can go back. That's what this is right here. Activated A can collide again with another A. So here's A colliding with A to produce activated A. Activated A can collide with A again to become deactivated. That's what this is. Here's the collision between activated A and ground state A. I get two ground state A's. There's no more activated A. And finally, activated A can fall apart into B, or can turn into B. This looks like a decomposition mechanism: A is decomposing into two B's. Okay, so we can apply the steady state approximation to this guy. Here's the basic idea and the most important assumption of the Lindemann-Hinshelwood mechanism, and it's an assumption that turns out to be wrong. The assumption is that A star possesses an internal energy that exceeds the activation energy. In other words, if there's a bond that has to break in A to form two B's, A star has enough energy within it to break that bond. That turns out to be a bad assumption, because not all collisions are going to generate an activated A that necessarily has enough energy to break up. You can imagine there's going to be a wide distribution of collision energies, because A can hit itself at a small angle and transfer very little energy to itself. That's one way in which A star could have a smaller amount of energy than it needs to react. This is called the strong collision assumption of Lindemann-Hinshelwood theory. The strong collision assumption is that these collisions that occur here generate an activated A that has enough internal energy to go on and do the reaction that you care about, whether it's a decomposition or an isomerization. Now, can we apply the steady-state approximation to this mechanism right here? Yes, we can, because there's an intermediate right there. The activated A is an intermediate. Here's our reactants, here's our products. A star is formed and then consumed. So it only exists transiently within the mechanism. That's the definition of an intermediate. What are our rate expressions going to be like? Well, the rate at which A star is formed, here's the generation rate, and there's two processes that consume it. The generation rate, of course, is that: A squared times K1. And then how is A star consumed? There's two things that consume it. There's this reaction here, A star times A with a rate constant K minus 1. That's just the reverse of this, of course. And A star can react with a rate constant K2.
And so both of these two processes use up A star, and so they have a minus sign in front of them. Minus sign here, because A star is used up by that guy. Minus sign here, because A star is used up by that guy. Plus sign here, because A star is generated by this guy. So you've got to be able to write these rate expressions by looking at the mechanism. Then all we do is set this equal to zero. Well, these are the rate expressions for the other species, A and B. And then we set this guy equal to zero, don't we? That's what the steady state approximation is. We set this guy equal to zero, and then we solve for A star. So if this is equal to zero, then I can set all the positive terms equal to all the negative terms. Now I'm being more careful. I'm putting this little steady state subscript on the A star concentration, because now we're talking particularly about the steady state concentration of A star. And I can just solve for A star in this expression here to get this guy right here. And so once I've got that, then I can just plug A star into the expression for the rate at which B is produced. So I just plug this whole guy in for A star, boom, there it is. And that's my rate law for the Lindemann-Hinshelwood mechanism. What does it predict? So look at this guy. You see how there's an addition operation in the denominator? That means there's going to be two limiting behaviors, depending on which one of those two terms is bigger or smaller. That's the first thing to notice. So we're going to talk about those two limits. If A is big, in other words, if K minus 1 times A is much larger than K2: this would be in the limit of high pressures of A. At high pressure, remember, there's only one reactant in this thing. You've got a vessel with A in it in the gas phase. If you put in a lot of pressure of A, that's the limit that you're in. This guy right here. This guy is much, much bigger than K2 in that limit. And one of these A's cancels with one of these A's, so I'm left with one A in the numerator and no A's in the denominator. And if you look at this, I can just cluster together these rate constants to get an effective rate constant times A. So the rate of this reaction, if it conforms to the Lindemann-Hinshelwood mechanism, is going to look like this. It's going to be apparently first order in A. That's what you expect just looking at the mechanism. It's a unimolecular reaction. Naively, you'd expect it to be first order in A, and it is, at high A pressures. Now, what does this mean mechanistically? Well, what does it mean for K minus 1 times A to be much, much bigger than K2? It means that this reaction, the deactivation, is fast compared to this reaction here at high pressures. In other words, you create an excited A, but it collides right away with a ground state A and gets deactivated. In that limit, you get a reaction that is first order in A. Now, that's true for large A. This is what we get. What happens if A is small? What happens at small pressures, low pressures of A? Well, obviously, this guy is going to be much smaller than that, but if you get rid of him, where is he? I've just got K2 now in the denominator. I don't have any A in the denominator to cancel, so that's still A squared. This K2 here cancels with that K2 right there, so I'm going to have 2 K1 times A squared. It's going to act like a second order reaction. At low pressures, it's going to look like a second order reaction. Bizarre.
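Here's that same steady-state algebra done symbolically, a sketch that keeps the factor of two from writing the product step as A star going to two B's, which is how the lecture's rate expressions come out.

```python
import sympy as sp

k1, km1, k2, A, Astar = sp.symbols('k1 k_m1 k2 A Astar', positive=True)

# Steady state on the intermediate A*:
#   formed by  A + A  -> A* + A   at rate k1 * A**2
#   removed by A* + A -> 2A       at rate k_m1 * Astar * A
#   removed by A*     -> 2B       at rate k2 * Astar
Astar_ss = sp.solve(sp.Eq(k1 * A**2 - km1 * Astar * A - k2 * Astar, 0), Astar)[0]
print(Astar_ss)                          # -> k1*A**2/(k_m1*A + k2)

rate_B = sp.simplify(2 * k2 * Astar_ss)  # d[B]/dt = 2*k2*[A*] since each A* gives 2 B
print(rate_B)                            # -> 2*k1*k2*A**2/(k_m1*A + k2)
```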
It's just A falling apart into products, or isomerizing, but that reaction is going to act like a second order reaction. Weird. What does this mean mechanistically? Well, if that is much less than that, that means that this reaction is fast and this reaction is slow. The rate of deactivation is low at low pressures: if you form an excited A, it's not very likely to get deactivated. There aren't very many collisions to deactivate it. It's more likely to fall apart. Once you form this guy, it's more likely to undergo this reaction than it is the deactivation reaction at low pressures. That's what it means. So this plot should make sense to us. Here's what the Lindemann-Hinshelwood mechanism predicts. Let's look at this and make sure that we understand this, because this is really everything in a nutshell here. What's on this axis? This is the pressure of A, the log pressure of A. What's on this axis? This is the log of the rate constant, the effective first order rate constant. What do I mean by that? If I pretend that the reaction has this form, then this rate constant here, if I apply it to this case right here, 2 times K1 times A is going to be equal to K effective. Can everyone see that? In other words, if I assume that the reaction rate always has this form, then I can write an expression for K effective. K effective in this case is going to be equal to 2 times K1 times A, because K effective would have to be equal to that in order for K effective times A to give me this. So this is the log of the rate constant. This is the log of the pressure of A, and what this plot says is: at low pressures of A, the effective rate constant depends on A. It goes up as a function of the pressure of A until it gets to a point where there's no dependence of the effective rate constant on A at all. We just derived these two cases. Here the effective rate constant is 2 times K1 times A. It does depend on A. It goes up linearly with A. That's what this shows right here. This rate constant here doesn't depend on A at all. It's completely A independent. Now, of course, the rate of the reaction does depend on A up here, because the rate is K effective times A. But if I take the rate constant, and I'm only plotting the rate constant here, I'm not plotting the total rate, here the rate constant becomes constant. It's equal to this collection of rate constants from those individual steps. Here the rate constant depends on A. This is classical behavior that's modeled by the Lindemann-Hinshelwood mechanism. At low pressures, the effective rate constant depends on A linearly, and at high pressures, it does not. Here the reaction is acting like a first-order reaction. Here the reaction is acting like a second-order reaction. The Lindemann-Hinshelwood mechanism explains that. It doesn't get it exactly right, but it comes close. Did everyone get that? Now, here's our Lindemann-Hinshelwood rate. Here's the expression that we derived earlier. Let's recast this equation. If I assume the rate looks like this, in other words, I'm defining an effective first-order rate constant, exactly the same way I did for this plot, I can then say that K effective, if this is true and this is true, then K effective has got to be equal to all this nonsense, including one of these A's. Can you see how that's A squared?
I'm going to take one of those A's, the other one is right here, so if this is K effective and I multiply by A, I get that. This is my effective Lindemann-Hinshelwood first-order rate constant. I can just take the reciprocal of this, so I can take one over K effective. I move these two guys into the numerator, there they are, I can split this into two terms. I can put the K minus one here, and if I do that, the A's are canceling, and I can put the K2 here, and if I do that, the K2's cancel. This is my simplified expression if I take the reciprocal of this guy. Everyone see that? This now is my Lindemann-Hinshelwood equation, if you will. What it says is if I plot one over K effective versus one over A, either the concentration of A or the partial pressure of A, I should get a straight line, I'm plotting that, versus one over A, I should get a straight line with the slope of one over 2 times K1, and a positive intercept. Everyone see that? So that's how you tell whether your reaction is conforming to the Lindemann-Hinshelwood mechanism. You make that plot. What is it? One over K versus one over A. One over K versus one over A, this is one over the partial pressure. How well does it work? Well, not that great. It's working pretty well at low pressure. This is one over pressure, so if it's working well here, that's low pressure. The Lindemann-Hinshelwood theory predicts a rate that is too low. What we predict, this is one over the rate, it predicts a rate that is too low at high A. This is high A because this is one over A, this is very confusing. This is high A, low A. High rate, low rate, because it's all reciprocal, it's a totally reciprocal plot. So the Lindemann-Hinshelwood theory predicts a rate that is too low at high A. There's a good reason for that. We'll talk about it later on. But where we expect it to work is at low A. We expect a positive intercept. You want to see that? We are looking for linear behavior down here. We expect it to roll off here. Lindemann-Hinshelwood theory has a well-known defect that we're going to understand in detail later, but for now, expect this guy to roll off like this. At high A, or low one over A. Okay? So all of this is based on our application of the steady-state approximation to the Lindemann-Hinshelwood mechanism. Actually, the Lindemann-Hinshelwood mechanism uses the steady-state approximation whether we want to or not. Okay. It's kind of deep and confusing. So that's what this is. Now, we're going to talk about one more case. So what were we just talking about? We were talking about unimolecular reaction mechanisms. How do unimolecular reactions occur? You've got a molecule, it's just falling apart. What's the mechanism for that? It's this Lindemann-Hinshelwood mechanism. We can understand that mechanism in terms of the steady-state approximation. Derive equations. What other reactions are going to be useful to look at with a steady-state approximation? It turns out that enzyme reactions are another case where that's true. And that's why reactions of this form are important. Reactions where a pre-equilibrium is established within the reaction mechanism. Pre-equilibrium. What am I talking about? Look at this. A reacts with B to give some complex of A and B. That's going to be the enzyme substrate complex. A and B can fall apart to give A and B again. So A reacts with B to form complex A and B. Complex A and B falls apart to give A and B separately again. There can be an equilibrium that involves this forward step and this reverse step. 
That's the pre-equilibrium. Then A and B can fall apart to give products. This looks like an enzyme reaction, doesn't it? Substrate reacts with enzyme to form an enzyme-substrate complex. The enzyme-substrate complex falls apart to give enzyme and substrate. Or a reaction occurs and generates product. But there's lots of other reaction mechanisms that also adhere to this pre-equilibrium model. Oh, that slide is out of... Yeah, this is... This is a problem where you're supposed to plot the data and find out whether the reaction conforms to the Lindemann-Hinshelwood mechanism. So what is this? This is the pressure of some gas, call it A. This is the effective first order rate constant that we're measuring for the reaction of this gas to form products. It could be an isomerization reaction. It could be a decomposition reaction. Some unimolecular reaction. So the question is: does this data conform to the Lindemann-Hinshelwood mechanism? How do you tell? Well, you have to make a plot of 1 over K effective versus 1 over the partial pressure, and see if there's any linearity in that plot at low pressures. Remember? And so you take 1 over P. You take 1 over K effective. So you take each one of these guys, take the reciprocal, and now you plot them. Here are these data points. Here's the plot. Does this look like Lindemann-Hinshelwood mechanistic behavior? Sort of. I mean, it's really ugly. It's sort of linear at low A. And it has this deviation that we're expecting. We would hate to see this on a test, because it's sort of nebulous whether this is really conforming to the Lindemann-Hinshelwood mechanism or not. It's not very linear. I should really concoct a data set that looks better than this. But that's what you would do. The point is that you make this plot. You take these reciprocals, you plot this versus this, and you look for linear behavior over here. You're not getting really good linear behavior in this case. This is real data, actually. That's part of the reason. One thing you'll learn, if you ever have to do experiments, is that experiments never, or very rarely, adhere perfectly to the theory that you're trying to apply to them. So this is a case of that. So getting back to the pre-equilibrium, we can use the steady-state approximation again. And we'll do that more on Friday. So that's what quiz seven is going to be about: the steady-state approximation.
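As a postscript to this lecture, here's the whole Lindemann-Hinshelwood diagnostic in code form, a sketch with invented rate constants and synthetic data generated from the mechanism itself, so the fit comes out clean; real data, as the example above shows, won't behave this nicely.

```python
import numpy as np

# Made-up rate constants, chosen so the fall-off happens around [A] ~ k2/km1
k1, km1, k2 = 1e-3, 1e7, 1e2

# (1) the fall-off: k_eff climbs linearly with [A], then plateaus at 2*k1*k2/km1
A = np.logspace(-8, -2, 7)
k_eff = 2 * k1 * k2 * A / (km1 * A + k2)
for a, k in zip(A, k_eff):
    print(f"[A] = {a:.0e}: k_eff = {k:.2e}")

# (2) the diagnostic plot: 1/k_eff vs 1/[A] should be linear, with
#     slope 1/(2*k1) and intercept km1/(2*k1*k2)
slope, intercept = np.polyfit(1.0 / A, 1.0 / k_eff, 1)
print(f"slope = {slope:.3e} (expect {1/(2*k1):.3e}), "
      f"intercept = {intercept:.3e} (expect {km1/(2*k1*k2):.3e})")
```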
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 23. Thermodynamics and Chemical Dynamics -- Lindemann-Hinshelwood Part I -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry.) We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:00:06 Lindemann-Hinshelwood 0:03:46 The Steady-State Approximation 0:21:26 Two Limiting Experimentally Observed Rate Laws 0:24:40 Elementary Reactions 0:31:10 The Strong Collision Assumption 0:38:04 The Kinetics of Pressure-Dependent Reactions 0:45:44 Reactions Where a Pre-Equilibrium is Established
10.5446/18948 (DOI)
Okay, how are you guys doing? You got those mid-quarter doldrums. I know how you're feeling. Okay, this is a chapter 16 topic. We're really cranking away on chapter 16 here. Edging our way towards the end of thermodynamics, which I hope will come probably in the middle of next week. Now we posted the quiz scores this morning. This was the hardest quiz. It might have been too hard. First question was pretty easy. Didn't you think? So these guys got A's. These guys got B's. There's a few C's here. We usually hardly have any C's. Okay, quiz five is Friday. The first two are supposed to be easy. The first two problems. In this case, the second one was actually not so easy. It was the body temperature one. Okay, I also posted the key in case you want to look at that. So we won. And what's even sweeter is we beat SC. Very, very sweet. I'm always amused by the fighting anteater. The anteater is the most docile animal that we know of in the animal kingdom. But to dress him up for the athletic department, here he is fierce. There's no such thing as a fierce anteater. In nature, they're not known to be fierce. Okay, so we're going to review a little from Friday. Why does that say Wednesday? We're going to talk about how the Gibbs free energy varies with temperature and pressure. We'll do a couple of examples. Just sort of easy ones to ease into this subject. Okay, we'll do a bunch more examples on Wednesday. All right, so on Friday, not Wednesday, we said, look, there's three types of systems. Okay, and uniquely for this guy right here, it's an isolated system. There's no energy or matter exchanged with the environment, the surroundings. We don't have to consider anything except the system when we think about the spontaneity of processes that occur within it. Okay, it's blocked off to the surroundings. It doesn't even know about them. And so we can say any process that has a positive entropy change is going to be a spontaneous process for an isolated system. We don't even have to think about the surroundings. They're not part of our thought process. But we don't have isolated systems in chemistry too often. They're almost always in communication with the environment. And so we have to consider open systems and closed systems as well. And in those cases, because there is communication with the surroundings, it's the total entropy, surroundings plus system, that matters in terms of figuring out if this process is spontaneous. Notice that the focus is totally on entropy. We're not saying anything about the energy. The energy can do anything it wants. We're only focusing attention on the entropy to understand whether these processes are spontaneous or not. Okay, so we've got to have this term in here for the surroundings. We didn't need it for the isolated system. Okay, so now we're just going to do a little algebra. We're going to move the surroundings over to the right hand side, put a negative sign in front of it. And then we're going to remember that dS is dQ over T. And so we can make that substitution for the surroundings right here. And then we're just going to remember that Q is a conserved quantity. In other words, if plus Q is heat entering the system, that heat had to come from the surroundings. And so the surroundings has to be minus dQ. There's conservation of Q. And then we have to think back and remember that dU is dW plus dQ. And so we can just solve for dQ in this expression: dU minus dW. And if we plug that guy in for dQ, we get this guy right here.
And if we consider only pressure volume work, this dW is minus P dV. Okay, and so then we're going to multiply by T surroundings and move it over to the left hand side. So we get rid of this T surroundings now. Now it's over here. And we just have dU plus P dV. We're going to drop that sys subscript. If you don't see a subscript, just assume we're talking about the system. Okay, so this is the pink equation. It only took us about four steps to get there, not from conservation of entropy, but from entropy dictating which process is spontaneous or not. We kept coming back to this pink equation on Friday in deriving different thermodynamic state functions from it. In fact, we showed that if the process occurs under conditions of constant volume and constant entropy, then it's the internal energy that tells us whether the process is spontaneous or not. And if instead the process occurs under conditions of constant pressure and entropy, why it's the enthalpy that's going to tell us whether the process is spontaneous or not. But unfortunately, we don't encounter these two sets of conditions very often. It's virtually never the case that the entropy is constant. If you're a chemist, you can ask, you know, how do you do that? How do you do an experiment with constant entropy? I don't know the answer. So when you're doing an experiment and you want to understand whether it's spontaneous or not, the chemistry that you're looking at, it's unlikely that you're going to be paying attention to these two variables to figure that out. They're not going to help guide your decision-making process in figuring out whether your chemistry is spontaneous or not. That's what we care about here. So we need some other state functions. And we talked about one, all right? The Helmholtz energy, all right? In chemistry, temperature is frequently constant but not the entropy, all right? Constant temperature, lots of constant temperature chemical processes that we can think about. So let's consider the case where dT is zero and dV is zero. We're going to need two things to be zero. Otherwise, we're not going to end up with a state function. And so we'll also define a new state function A, which is going to be called the Helmholtz energy. It's going to be defined as the internal energy minus TS. And so if we take the derivative now to get dA on the left-hand side, we're going to get dU and we're going to have d(TS). And so we can split that into two terms, minus T dS minus S dT. And then we can just solve for dU. And this expression, dU, is going to be equal to dA plus T dS plus S dT. And of course, the next thing that we do is we plug this thing into the pink equation, put this expression for dU into the pink equation, and then look for the terms that are going to cancel for us, all right? So we've got dA, T dS, S dT, and our expression for pressure volume work from the pink equation. Okay, and the first thing that we notice is that we got T dS here. We got T dS here. Now these two Ts are different in principle. This is a T for the surroundings. This is a T for the system. But as we converge on equilibrium, these two temperatures will become very, very close. And under those conditions, we can expect these two terms to cancel for us. And then under conditions where we said dT is 0 and dV is 0, there's dV. We can cancel out that term. And dT, we can cancel out that term. All right, and we're just left with dA. There's nothing else left. And so it's going to be dA is less than 0.
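For reference, here is the same chain of steps written out compactly; this just restates what was said above, with the system subscript dropped and T treated as the common system and surroundings temperature near equilibrium.

```latex
% the pink equation (the spontaneity criterion from the entropy argument):
dU + P\,dV - T\,dS \le 0
% define the Helmholtz energy and differentiate:
A \equiv U - TS \quad\Rightarrow\quad dA = dU - T\,dS - S\,dT
% solve for dU, substitute into the pink equation; the T dS terms cancel:
dA + S\,dT + P\,dV \le 0
% hold T and V constant (dT = 0, dV = 0):
(dA)_{T,V} \le 0
```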
And so this Helmholtz energy is going to be a state function that we can use to tell whether the chemistry that we care about is spontaneous or not when temperature and volume are held constant. And in the laboratory, we can enforce those limits. We can do an experiment at constant temperature, maintaining the volume constant, let the pressure do whatever it wants. Okay, we need one of these things. All right, it's got a defined volume and it's built like a tank, and so even if the pressure changes a lot, we're going to enforce constant volume. And in principle, it's the Helmholtz energy that will tell us whether a reaction in this Parr bomb is going to be spontaneous or not. Right, we would want to, if we do an experiment in here, we would want to use the Helmholtz energy to figure out whether it's spontaneous or not. Now, if you do undergraduate research, how many people have done undergraduate research? How many people have seen a Parr bomb? All right, a few of you, who do you guys work for? You all work for the same person? Oh. He's got a Parr bomb in his lab. Okay, so it's not completely impossible that you would use one of these things, right? They are in rather common use but I would say probably 99.9% of all the chemistry that we're likely to do is not going to be in a Parr bomb. All right, 99.9%. So, we need a different thermodynamic function. The Helmholtz energy is fine but constant volume is inconvenient for us to use because we need a Parr bomb to do it in many cases. In chemistry, it's even more useful to make predictions about processes occurring at constant pressure and temperature because that's dead easy. Right, we live in an environment of quasi-constant pressure. Okay, and so we can do chemistry that's open to the environment and make predictions about whether it's spontaneous or not. To do that, we're going to use something called the Gibbs energy. All right, we're going to define it as H minus TS. Okay, we're going to do the same kind of algebra we did before: dG is dH minus d(TS), and so we've got two terms from that, T dS and S dT. And then we're going to think back to Friday when we wrote an expression for dH. We said dH is dU plus P dV plus V dP. Okay, and so we can just plug that in for dH here. Now, we've got this long thing here that's equal to dG. Okay, and so once again, we're just going to solve for dU. All right, put all of this other stuff on the right-hand side. And then once we've got dU, we're going to just plug it into the pink equation again. There it is. Put all of this stuff in for dU. We get this long thing here. Okay, and some of these terms are going to start canceling for us. As usual, P dV, P dV, right? Remember these two pressures are not in principle identical. That's the system pressure. That's the surroundings pressure. All right, but in the limit of equilibrium, they will be the same. All right, T dS, T dS, same idea there. And then because we're talking about G, we're going to make P constant. And so we're going to lose that guy. And dT is zero, so we're going to lose that guy. So everything cancels out here except dG, which is going to be less than or equal to zero. And so that's going to be the state function that we're going to want to key on most of the time as chemists. All right, now if you're a physicist, if you're some other kind of scientist, these other state functions might be more important to you under other sets of conditions. But for chemists, it's all about the Gibbs free energy. The Gibbs energy, we're not supposed to call it the Gibbs free energy anymore.
It's just the Gibbs energy. Okay, now I know that's tedious. But this is important, right? This is actually one of the more important things in thermodynamics that we need to be able to understand. All right, here's where we're doing chemistry. And this guy right here, we're open to the atmosphere, and the Gibbs function is going to tell us whether this blue stuff here is going to react spontaneously. All right, we don't need the Parr bomb. Okay, so today and last Friday, we've taken the condition for spontaneous change for non-isolated systems. We consider the total entropy change, system plus surroundings. That's got to be greater than or equal to zero. And from that, we derived all of these different conditions that apply for these different constraints: volume and entropy, temperature and volume, pressure and entropy, temperature and pressure. We've got these four different conditions. And what I've told you today is, look, these two are not super useful to us. As chemists, these two are more useful. And this one is way more useful than that. All right, we derived these all. We didn't have to assume anything. Very proud of that. It's hard. Okay, now these conditions here also serve to tell us whether this system is proceeding towards equilibrium. It not only tells us whether the chemistry is spontaneous, it'll tell us whether the system is proceeding towards equilibrium or not. For example, what I'm plotting here is the Gibbs energy for some chemical process. And on this axis, I have the reaction coordinate. So this represents reactants right here and this represents products. This is 100% products. This is 100% reactants. But as you move along the axis in this direction, we're converting reactants into products. That's what I mean by a reaction coordinate. Sometimes we'll call this reaction coordinate X or chi. Okay, reactants getting converted to products, very generic. What does dG, that should be P and T, how did that happen? Less than zero. There it is, P and T. All right, so first of all, this difference here between the reactant and product Gibbs functions, that's the delta G of reaction. Makes sense. Now, let's consider a process that starts right here and ends right here. And we can ask, is such a process going to be spontaneous or not? Well, we have a criterion here if we just change that V to a T. All right, we know dG should be less than zero. Okay, and so we can say G final minus G initial is the dG, final minus initial. Is that going to be less than zero or greater than zero? What do you think? Yeah, it's a small number minus a bigger number, and so that difference is going to be negative, isn't it? All right, and so we would predict that's a spontaneous process. Yes, dG at constant T and P is less than zero. What about this guy? Same conclusion. What about this guy? No, final minus initial is going to be a positive number now. All right, so that's not going to be a spontaneous process going from here to here. No. All right, and what about that guy? Yes, final minus initial is going to be negative again, and so that should be spontaneous. All right, so basically what we're concluding is that if you're over here, we're going to go spontaneously downhill in this direction, and if you're over here, you're going to go spontaneously downhill in this direction, and that this minimum here in the Gibbs energy is going to indicate the equilibrium position of this reaction. It's the point where dG over dX, where X is now my reaction coordinate, is equal to zero.
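Collecting the four criteria just listed, together with the equilibrium condition, in one place:

```latex
% spontaneity criteria under the four sets of constraints:
(dU)_{S,V} \le 0 \qquad (dH)_{S,P} \le 0 \qquad (dA)_{T,V} \le 0 \qquad (dG)_{T,P} \le 0
% equilibrium along the reaction coordinate x, at constant T and P:
\left(\frac{\partial G}{\partial x}\right)_{T,P} = 0
```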
At that point, there's no more driving force for spontaneous change. We're at equilibrium. Okay. Now, amongst these four thermodynamic potentials, U, H, A, and G, G will be by far the most important to us. Yes, yes, yes. How does G depend on temperature? All right, how does the Gibbs energy depend on temperature? Well, that's a rather important thing for us to understand, because as chemists, if we want to accelerate a reaction, G is going to tell us whether the reaction is spontaneous or not. All right, we want to understand how temperature will influence that spontaneity. G is equal to H minus TS. We know that, and so we can take the derivative immediately, dG/dT. Even I can take this derivative. I get minus S. Okay, and so what this tells us is two things. First of all, since we know that S is always a positive quantity, there's no such thing as negative entropy. S is always a positive quantity. All right, that tells us that G has to decrease with increasing temperature, because that derivative is always going to be negative. All right, that's kind of surprising. The Gibbs energy is going to go down as the temperature goes up. That's counterintuitive. Don't all energies go up when you increase the temperature? Not this one. All right, the Gibbs energy goes down as you increase the temperature. Not only that, but the rate of change of G with temperature is greatest for systems having high entropy. The higher the entropy, the greater the change in G is going to be with temperature. Well, what kind of systems have high entropy? Well, gases, my laser pointer's dying, gases have the highest entropy. And so the rate of change of the free energy with temperature is going to be the highest for gases, then liquids, then solids. Solids have the lowest entropy. Okay, so this plot is right out of your chapter 16. Gases, biggest slope, right? Here's the Gibbs energy on this axis. Here's temperature. It's going down. For every single one of these guys, it's going down, right? But it's going down at a rate that depends on the state. Gases show the largest decrease in Gibbs energy with temperature. Liquids next, solids show the least. Okay, so a couple things are, I mean, one thing that's surprising for sure is that the Gibbs energy goes down with temperature. All right, it's an unusual energy, isn't it, that goes down with temperature. Okay, so we can evaluate this derivative. And then we can go back to this equation right here, and we can just say, we can solve for minus S. All right, if I solve for minus S here, I'm going to get G minus H over T, just solving for minus S in that equation right there. Okay, so I've got dG over dT at constant P is G minus H over T. And then we can rearrange that, just split this into two terms, move G over T to the left-hand side. I don't know why we actually did that, because I don't think we need this result here. All right, maybe we'll come back to this in a second. But let's just look at this for a second. This is the derivative of G over T. I don't know yet what that has to do with anything; this is just the derivative of G over T. If I use the quotient rule to evaluate this derivative, I've got 1 over T times the derivative of G with respect to T, and I've got G times the derivative of 1 over T with respect to T, right? Two terms in my quotient rule expansion. Okay, now, the derivative of 1 over T is just minus 1 over T squared, right? Okay, and so that's that derivative right there.
And this guy, if we factor out 1 over T, so I'm going to pull the 1 over T out of both of these two terms and put it right there, all right? Now I've got this expression here, and that is just minus the entropy, right? The derivative of the Gibbs energy with respect to T at constant P, that's minus the entropy. Okay, and so I can plug that in to this expression right here. I still got G over T, and then I can move, maybe that's why I did it. No, all right, forget that. This, all right, is just plugging in for G from that equation two slides ago. Okay, and so this is S over T. This is S over T, so we're going to get rid of the S over T. We're just going to be left with minus H over T squared, all right? And this is your equation 15.62b. This is the Gibbs-Helmholtz equation, which is important because it allows us to measure H by looking at the temperature dependence of G. And the temperature dependence of G is something we're going to be able to measure experimentally, all right? And so we can get H directly from that using this Gibbs-Helmholtz equation. Now, if this is a delta H and that's a delta G, this equation still holds. Okay, so now let's ask some questions about the Gibbs function. We already asked a question about the temperature dependence. We said the Gibbs function goes down with increasing temperature, surprising. The rate at which it goes down depends on the entropy, all right? The higher the entropy, the faster the temperature rate of change of the Gibbs function. What about pressure? All right, we've got this expression here for dG. And if we want to look at this at constant temperature, we can say dT is zero, all right? And so that term is just going to go away. We've got dG is V dP. And so to find out what the Gibbs energy change is, there's the Gibbs energy at P final minus the Gibbs energy at P initial. I can just integrate this V dP, all right, from initial to final pressure. And if I know what that is, if these are molar quantities, of course, there's going to be an m there, an m there, and an m there, all right? If these are all molar quantities. And so this equation is not super useful to us unless we know how this volume changes with pressure, all right? But one thing that's obvious is for phases like solids and liquids that are essentially incompressible, Vm is virtually constant, independent of pressure, all right? There's not much compressibility of a solid phase or a liquid phase, right? And so there isn't much pressure dependence of Vm. And so we can write a simpler expression. If we can pull Vm out in front of this integral sign, right, because it's constant, then the integral just turns into this. And we've got an extremely simple expression that allows us to evaluate the pressure dependence of the Gibbs energy, right? It's just the molar volume times the change in pressure. So if you look, once again, this is the Gibbs energy on this vertical axis here. And this is the pressure on this horizontal axis, right? And for liquids and solids, you get virtually a horizontal line. Because they're incompressible, right? Their volume doesn't depend on pressure very strongly. Interestingly, as the pressure gets higher, the Gibbs function goes up a little bit. All right? With gases, there's a much stronger effect. Gibbs energies of gases depend strongly on the pressure. And you might expect them to because gases are far from incompressible. They're highly compressible. All right? So their molar volume is highly dependent on pressure.
Consequently, their Gibbs energy is highly dependent on pressure. In fact, their Gibbs energy goes up with increasing pressure. Now, we can actually figure out what this is for ideal gases very readily. All right? We can just substitute for Vm from the ideal gas equation. Move the RT out front. All right? So that's going to move out front. We've got 1 over P left inside, so the integral is just going to give log P final over P initial. That is the equation that describes the change in the Gibbs energy for an ideal gas as a function of pressure. All right? So change the pressure, we have a very simple equation. Probably should go on your equations page for quiz five. What is with this projector? What is with my laser pointer? Too cheap to buy new batteries for it. Please, please. OK. So this is what the volume is doing as a function of pressure for an ideal gas. It's following this purple line here. OK? And so if we want to evaluate this integral, we're going to be integrating from some initial pressure to some final pressure. This is the area underneath this curve. So this is the Gibbs energy. All right? And it's obvious that as we make Pf higher and higher and higher, this integral is going to get bigger, right? And so it's obvious that the Gibbs energy is going to go up. Just based on that. Now we can define a standard molar Gibbs free energy. All right? We're constantly, there shouldn't be the word free in here. We're trying to get rid of the word free. Should be the standard molar Gibbs energy. All right? That's defined at a defined pressure, which is one bar. All right? That's how we define the standard Gibbs energy. Or the standard entropy. Or the standard anything. If it says standard, it's one bar. OK? And so in this case, we can write this expression. It just follows directly from this guy right here, except that we've now defined a particular Gibbs energy that applies when the pressure is one bar. OK? So this initial guy is now that guy. OK? So here's what this plot shows. Here's the molar Gibbs energy, and here's the pressure. And what we said earlier is that as I increase Pf, I'm going to make this integral larger. And so that tells us the Gibbs energy's got to go up with increasing pressure. All right? What I just showed on this slide right here is that we're going to define a special Gibbs energy at one bar. And so that's what this becomes. This becomes one bar. Imagine this is one bar, and now we're integrating to higher pressure from that. So the same intuitive picture applies. If we move this to higher pressure, the integral is going to go up. And that's why this plot is going up, up, up, up. But you can see that it's got downward curvature. It's got downward curvature because here there's a big change in the Gibbs energy. Smaller, smaller, smaller. This has got upward curvature. So as we integrate this guy, increases in pressure are going to have a progressively smaller and smaller effect on the Gibbs energy. And that's what we're seeing here. That's why there's downward curvature of this guy. All right? So the Gibbs energy goes down as we increase the pressure and up as we increase, sorry, goes down as we increase the temperature and up as we increase the pressure. Is all of this confusing? Yes. Absolutely. I mean, if you don't think so, you're just not paying attention. OK. Should we do some examples?
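Before the examples, here are the two constant-temperature working equations from this stretch of the lecture, written out for the equations page; they restate what was just said rather than adding anything new.

```latex
% constant temperature: dG = V dP, so for molar quantities
\Delta G_m = \int_{P_i}^{P_f} V_m \, dP
% incompressible solid or liquid (V_m roughly constant with pressure):
\Delta G_m \approx V_m\,(P_f - P_i)
% ideal gas (V_m = RT/P):
\Delta G_m = RT \ln\!\left(\frac{P_f}{P_i}\right)
```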
The change in the Gibbs energy of 25 grams of methanol, mass density 0.791 grams per cubic centimeter, when the pressure is increased isothermally from 100 kilopascals to 100 megapascals. Oh, I should say, calculate the change in the Gibbs energy; there's no verb. Calculate the Gibbs energy when we subject this methanol to a change in pressure. That's a change by a factor of 1,000. It's a big change. But so your first thought process should be: it's a liquid, it's incompressible, let's use the simplest equation we can. And the simplest equation for any liquid or solid is just Vm times delta P. Vm times delta P, and I immediately get the change in the Gibbs energy. This is what I did. And so if I'm willing to give up these molar quantities, forget molar, I'm just going to calculate the difference. It doesn't ask me for the change in the molar Gibbs energy, it just asks me for the change in the Gibbs energy. So here I'm going to calculate the change in the Gibbs energy. The volume of my 25 grams of methanol, there's the density and so I can calculate the volume. That's 10 to the 5, that's 10 to the 8, so there's my factor of 1,000, and so when I plug these numbers in I get 3.157 times 10 to the 9 cubic centimeter pascals. That's the delta G that I get. I don't like these units but I can convert them later. Now, oh, so I converted them right here. 3.157 times 10 to the 9 cubic centimeter pascals, no I don't like that, and so I can use, here's the conversion factor for pascals to atmospheres, there's the conversion factor from cubic centimeters to liters, now I've got liter atmospheres, and so then I can use the ratio between the two different Rs to do the unit conversion to get joules. Tedious, but when I do that I get 3.157 times 10 to the 3 joules, or roughly 3 kilojoules. That's what the change in the Gibbs energy is going to be. Now, when I looked up the answer in the key it had something far fancier. It used the isothermal compressibility of methanol, which you can look up in the back of your book. That's the isothermal compressibility, it's 1.26 times 10 to the minus 9 per pascal, well for goodness sakes. We can go through and do it with the isothermal compressibility and see how different our answers will be. The isothermal compressibility is defined by equation 14.59: it's minus 1 over V times the derivative of volume with respect to pressure at constant temperature, and we can just linearize this differential, final minus initial, because we know the change in volume is going to be small. The pressure difference is not small actually, but we're going to linearize that also. So we're going to make two approximations. We're going to say the initial volume is just the final volume also; it's not going to change much. So that's going to become Vi, so I know what that is. And the final pressure is much, much larger than the initial pressure, and so that difference right there I can just approximate as the final pressure. The initial pressure is just a rounding error on the final pressure, literally. So that's the equation I'm going to use. And so if I just do a little algebra, split this into two terms, now solve for Vf, I get this expression right here, which can be further simplified to give me this expression right here, and then what I've got to do, sadly, is integrate that, so I'm going to plug that in for Vm and do the integral. And so that's what I did here. I can calculate what this Vi is, that's the mass, that's the density, and here are the two integrals I'm going to do.
I'm going to actually pull this out front, and then I'm going to take the integral of dP, well that's pretty easy to do, and then kappa T times the integral of P dP, and so I can run these two integrals. Now this is just going to be 10 to the 8 minus 10 to the 5. This guy is one-half kappa T P squared, and when I plug the integration limits into these two antiderivatives, this guy ends up being almost 10 to the 8. That's the rounding error that gets subtracted from him, and these are the results for these other two terms. And so what I calculate is 2.96 times 10 to the 3 joules, which is also about 3 kilojoules, but if I do this carefully, considering the isothermal compressibility of methanol, I get a slightly different answer. It's different by one part in 30 roughly. That's wrong, that's right, but pretty darn close to being right. So it depends on the precision that you need on the quiz. That might be good enough, unless this is also an answer. That would be cruel, wouldn't it? One part in 30 difference in the answer? No, I would never do that. Everyone see how to do that? Isothermal compressibility makes it a little bit more complicated. So the last thing to say today, and this may be completely obvious to everybody, is that if you want to know the standard Gibbs energy for a reaction, like this reaction right here, what you do is you look up in a table the Gibbs standard, that free shouldn't be there again, the Gibbs energy of this stuff, the Gibbs energy of this stuff, and the Gibbs energy of this stuff, and the standard energy difference is that minus that plus that. I can look this up in tables. Now, if I don't have a table of Gibbs energies, well I've only got a table of entropies and enthalpies, which is not terribly uncommon, you've got to use this equation right here. You can look up these enthalpies, you can look up these entropies, and if you know the temperature at which the reaction is happening, you can figure out whether the reaction is spontaneous at that temperature or not. You can calculate the delta G. This is the delta G of reaction. So in the case of H, we've got this delta sub R H; what that notation means is that R is the reaction, right? This is the enthalpy change for the reaction. This is the delta H of formation. That's what we're going to look up in the table. What we want is we want to look up the delta H of formation for all the reactants and all the products. Add up the reactant delta H formation, subtract, sorry, add up the product delta H formations, subtract the reactant delta H formations to get the delta H for the reaction. This superscript zero means standard. What does that mean? Pressure equals one bar. Temperature equals 298.15 K. What is that nu? It's just a stoichiometric coefficient in front of these. So that would be one, one, and one in this case. We need to do the same thing for the entropy. Here's the enthalpy, here's the entropy. We still get stoichiometric coefficients. We're going to look up these standard entropies, products minus reactants. So we can do that. Here, I've done it, by golly. That's for this guy right here, which is what? Propionic acid. And these two guys are for those two in no particular order. Okay, so the delta H is just that, an evaluation of that, all of those numbers. Okay, and you can do the same thing for the entropy, from a table of standard entropies. You do exactly the same thing. And the delta G then is just the delta H minus T times the delta S, where we plug in the T.
All right, we get minus 72.6 kilojoules per mole as the delta G. We conclude that this reaction should be spontaneous at this temperature. Okay, I think that's all I got. 1:05. Are there questions about this stuff? I know it's not riveting. All right, we'll do, hopefully, it's going to get more interesting, I think.
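As a numerical check on the methanol example above, here is a short Python sketch working entirely in SI units; it reproduces both the simple incompressible estimate (about 3.16 kJ) and the isothermal-compressibility answer (about 2.96 kJ) quoted in the lecture.

```python
# Methanol example in SI units: Delta G for an isothermal pressure jump.
m = 25.0e-3            # kg (25 grams of methanol)
rho = 791.0            # kg/m^3 (0.791 g/cm^3)
V = m / rho            # initial volume, ~3.16e-5 m^3
Pi, Pf = 1.0e5, 1.0e8  # Pa: 100 kPa -> 100 MPa

# Incompressible approximation: Delta G = V * Delta P.
dG_simple = V * (Pf - Pi)                    # ~3.157e3 J

# With the isothermal compressibility, V(P) ~ V*(1 - kappa*P), so
# integrating V(P) dP gives V*(Delta P - (kappa/2)*(Pf**2 - Pi**2)).
kappa = 1.26e-9                              # 1/Pa, value quoted above
dG_kappa = V * ((Pf - Pi) - 0.5 * kappa * (Pf**2 - Pi**2))  # ~2.96e3 J

print(dG_simple, dG_kappa)
```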
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 15. Thermodynamics and Chemical Dynamics -- Getting to Know The Gibbs Energy -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry.) We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:02:42 Entropy in Isolated and Unisolated Systems 0:06:09 Enthalpy and Internal Energy for a Spontaneous Process 0:07:20 Helmholtz Energy 0:09:57 Parr Bomb 0:11:16 Gibbs Energy 0:24:40 Gibbs-Helmholtz Equation 0:30:19 Standard Molar Gibbs
10.5446/18945 (DOI)
Okay, how are you guys doing? How'd the exam go? How many people thought the exam was too easy? How many people thought the exam was too hard? So we're going to post the scores later on today. You don't want me to post the scores? John and G. and Matt, Mark, Mac, graded all of these exams by themselves. So I'm very grateful to them for doing that. And I don't honestly have any idea how you did right now. So I'll find out later today just like you. But don't worry, we'll deal with the outcome no matter what it is. I don't think it'll be that bad. Unfortunately, I did make one major mistake which was, you know, I'm supposed to put one of those exam return forms on the front of the exam and I always forget to do that for some reason on the first exam of the quarter. Quarter after quarter. So we're going to have to return these exams to you in discussion this week. And so in your discussion section, your TA is going to return your exam to you. Okay, you can look at it. I'll post the key later on today as well. So you can look at the key, compare that with how your exam was graded, talk to me. Okay, we do have a quiz Friday, quiz four. And it's going to be about chapter 15, which is all about the thermodynamic definition of entropy. We'll be talking about that today and Wednesday. Okay, and I'll tell you more about what's going to be on the quiz on Wednesday. Now, where are we? Well, we're right here. We've been going at this for four weeks now. We had your midterm on Friday. We're coming to the end of this topic of statistical mechanics and thermodynamics. We're starting chapter 15 today and we're going to just sort of zoom through 16 and 17. And at like 20,000 feet, we're going to look at these chapters. We don't have time to devote all the time that we ought to to these two chapters because we have to talk about kinetics and reaction dynamics. And so we're going to sort of skim these two chapters, 16 and 17. So believe it or not, we are near the end of where we're going to be talking about this topic right here. And I'm personally happy about that because I find this topic difficult myself. If you think it's difficult, I think it's difficult too. So we're going to transition to chemical kinetics probably next week. Probably maybe some place in the middle of next week. And we are pretty much on track, which is unusual for us. By now we've usually trashed the schedule and we're behind and we're starting to throw stuff out. But we're pretty much hanging in there and pretty happy with where we are. So just so you have an idea, you're going to need to read 16 and 17. You're going to have to have some knowledge of what's in those chapters, but probably not everything, and I'll tell you what's important. So chapter 14 was all about the first law, conservation of energy. Energy is conserved for an isolated system. The change in the internal energy is equal to zero for any process. The central concept is internal energy. We're constantly talking about that in chapter 14 and in connection with the first law. And what we learn from the first law is to understand whether a transformation is allowed. A transformation that doesn't conserve energy is not allowed, in other words. And so we can tell immediately whether a particular process is even possible or not. But that doesn't tell us whether it will actually happen. It only tells us it's an allowed process. The second law is this: the entropy of an isolated system increases in the course of spontaneous change.
We find out whether the process that's allowed will also happen. Not all allowed processes will happen. We need to be able to sort allowed processes into processes that will not happen and ones that will. The central concept here is entropy, not energy. The answer to the question: is a transformation spontaneous? Will it happen? If it's allowed, will it happen? Pretty important question to be able to answer in chemistry, I think you'll agree. Remember this? We talked about entropy weeks ago. We talked about statistical entropy. We put nickels in a shoe box, 100 of them. We shook the box and then we looked inside, made sure we had one layer of nickels. We saw this evolution in the distribution of heads and tails as we shook the box, shook the box, shook the box. We concluded that what was changing here is not the energy, because the energy of this state is just virtually identical to the energy of this state. The energy doesn't really depend on whether the coins are face up or face down. Those are energetically degenerate states for the coin. Yet we see the system evolve consistently in this direction. What we concluded is that this is the direction of increasing W. The system is optimizing on increasing W. It wants a configuration that maximizes W. We concluded for any isolated assembly, we can always predict the direction of spontaneous change as that in which W increases, and remember, W is not energy. It's something else. Boltzmann postulated that this parameter is the entropy. He defined it according to this equation here. This is really a postulate. This is the statistical definition of entropy. This is the statistical definition. We're going to talk in a moment about the thermodynamic definition. Now we can already take the statistical definition and apply it to the expansion of a gas. We're going to try and tie together statistical entropy and thermodynamic entropy. What is the probability of finding a gas molecule in the entire volume of a closed vessel? I've got one molecule in a vessel. The probability that it's in the vessel somewhere, if this vessel is airtight, that molecule can't get out, the probability is one. There's a 100% chance that the molecule is in the entire vessel. I put it in there. It can't be anywhere else. Now what's the probability that it's in half of the vessel? We know the answer intuitively: one-half. How did we get that answer? Well, if we're talking about half the vessel, V prime is V over 2, and the probability is V prime over V, which is one-half. Now, let's say there are two molecules. The probability that both are in V prime is the product of these two one-molecule probabilities. In other words, we can take the probability for molecule one, multiply it by the probability for molecule two. That's the same as the probability for molecule one. We just multiply these together. That's the same as squaring that. For N molecules, this isn't going to be squared anymore. It's going to be this ratio to the Nth power, big N. We're just using these statistical probabilities to arrive at this conclusion. This is going to tell us something about volumes. Let's see if we can apply it. Here's an experiment. I've got gas A in this half of this vessel and gas B in this half of this vessel. A and B are located in the two halves of a container. Now, I take away the barrier between the two halves. What's going to happen? We know with 100% probability these two gases are going to mix. We don't have to wonder whether that's going to happen or not. We also understand that that's a manifestation of the entropy of the system increasing.
That's a direct manifestation of the second law. Notice how I switched from blue to red to purple. That's beautiful. Does this process show an increase in entropy? We know it does. Can we show that it does? Here's the Boltzmann law, S equals k log W. The change in the entropy is the final entropy of the system minus the initial entropy of the system, because entropy is a state function. That's the entropy associated with the final volume. That's the entropy associated with the initial volume. We can just plug Boltzmann's law into this equation right here. We've got the final number of states and the initial number of states. This is a minus. This is a subtraction. I can just write this log as Wf over Wi. That's the change in entropy right there. We have an equation that relates these numbers of states with this volume. We just showed that W sub N for V prime is given by this equation right here. We can just substitute this guy for that and a term just like him for that. Vf and Vi play the role of V prime. I'm going to put Vf over V in the numerator and I'm going to put Vi over V in the denominator. These two V's are just going to cancel for us. We're just going to end up with Vf over Vi, and I can move that N out front of the logarithm. The change in the entropy is going to be k times the number of molecules times log Vf over Vi. I've got a term like that for delta S for gas A and delta S for gas B, because there's two gases involved here. There's a Vf and a Vi for both gases. If I took only gas A and I allowed it to expand into a larger volume, I think you'd agree it would do that spontaneously. If there was no gas B we could arrive at the same conclusion. There just would be no second term here. This would be Nk log 2. Both of those are going to be spontaneous processes: A expanding into a vacuum if we take the barrier away, and A and B spontaneously mixing if we take the barrier away. This is a positive number. We just need to know N to calculate it. We can calculate what this delta S is, but it's positive, and that means this is a spontaneous process. Going all the way back to lecture 4, what if instead of a change in entropy we wish to calculate the absolute entropy of a monatomic gas? We actually talked about this briefly in lecture 4. We derived an equation for this purpose. We started off with this equation right here. We did a bunch of extra steps, but I'm not going to talk you through them; you can see lecture 4 for the rest of this derivation. We derived this thing called the Sackur-Tetrode equation. The thing is, this is not a delta S now. It's an absolute entropy. Usually we're calculating a change in the entropy, but here for a monatomic gas we've got an equation that tells us the absolute entropy. It's a pretty important equation. It can anchor our entropy calculation. This is just the residual end of this derivation. We can use the Sackur-Tetrode equation to calculate the standard molar entropy of something like neon gas. This is actually right out of lecture 4 as well. That's the thermal wavelength. What is this? Does this mean it's in meters? Yes, yes, yes. We showed that that was true. Then we can plug everything in. That mass has to be in units of kilograms. Never forget. The mass of the neon atom has to be in units of kilograms. And so neon has a mass of 20.18 grams per mole, and that's 20.18 times 10 to the minus 3 kilograms per mole. Avogadro's number, blah, blah, blah.
That's the mass of a single neon atom in units of kilograms. Yes, yes. We can calculate the thermal wavelength. It's always very, very short. We expect to get an absurdly small number like this, so never be surprised by that wavelength, and then we can plug everything into the equation, and S is equal to 138 joules per Kelvin per mole. Those are our units of molar entropy: joules per Kelvin per mole. Okay, so this is just straight out of lecture 4. I'm only putting it in here to remind you that we can calculate the absolute molar entropy of an ideal monatomic gas, because that's something you might need to do hypothetically on a quiz. Okay, now, this is Sadi Carnot. He's the first French guy that we've talked about this quarter. And I could say something bad about the French, but I won't. He was interested in steam engines. He was a mechanical engineer. And he wanted to understand how to make them more efficient. As we go through and we look at these people who contributed to thermodynamics and statistical mechanics, they all had a practical motivation for what they were doing. They weren't just interested in pure science. I mean, they were interested in pure science, but they needed to get an answer to make something happen better. Make better beer if you're Joule. Make it more efficiently. Make a better steam engine if you're Carnot. He wanted to know two things. Is the energy available from a heat engine (a steam engine is a type of heat engine) unbounded, or are there some fundamental limits involved? How do I know when my steam engine is operating as efficiently as I could possibly expect it to? Or is there no limit to how efficient it could be? He was doing this stuff actually in the 1820s. And this is before Joule had figured out the first law. So conservation of energy wasn't a defined concept at this point in time. He was thinking about entropy before conservation of energy was even worked out. And the second thing is, can a steam engine be made more efficient by changing the working fluid, using something besides water as the working fluid in the steam engine? He was thinking about that. So I told you, Carnot, he's French. How about Maxwell? Scotland. Yes, Maxwell's a Scot. Joule, you guys know that one: England. Gibbs? American, yes. We're in there. We're going to be saying more about him. Boltzmann? Austrian, yes. There they are. This would be a good quiz question. So there's a statistical definition of entropy. That's the Boltzmann equation. And there's a thermodynamic definition, which is this. Maybe we'll derive this equation on Wednesday. I don't know. It takes about six slides to do that. The change in the entropy is the change in the heat for a reversible process divided by the temperature. We're going to talk about what happens when the process is not reversible on Wednesday. But today we're always going to be talking about a reversible process. The change in the entropy is the change in the heat that flows at a defined temperature. That's what the entropy is. That's the thermodynamic definition. So what did Carnot do? He worked out how much work you can extract from a temperature gradient. A steam engine is a temperature gradient. You've got steam at maybe 150 degrees C. You've got the ambient temperature, which is roughly 20 degrees C. So you've got a huge temperature gradient, and you're extracting work from that. You're pulling the train with that temperature gradient for all practical purposes. And so he devised this thing called the Carnot cycle.
We talk about the Carnot cycle because it's extremely important. It places an upper limit on the amount of work that we can extract from a temperature gradient. Alternatively, it tells us how much heat we can pump with a given amount of work. That's a very practical problem that we need to be able to solve. The Carnot cycle is the most efficient existing cycle capable of converting a given amount of thermal energy into work, or conversely, it's the largest temperature difference we can establish using a particular amount of work. It's both things. So if you've heard of a heat pump, a heat pump, its efficiency is bounded by the efficiency of a Carnot cycle. We have to be able to understand this. Now, a heat engine extracts work from a temperature gradient. Here's a hot temperature. This would be like the steam in a steam engine. Here's a cold temperature. This would be like the ambient temperature outside the steam engine. This delta T is the temperature gradient that we're talking about, Th minus Tc. Some of that can be extracted as work. Some of that temperature gradient can be extracted as work. What the Carnot cycle tells us is how much. How much work is it possible to extract from this temperature gradient? So here it is. Here's a pressure volume trace. This is an isotherm at T1, and here's an isotherm at T2, in pressure volume space. We're starting here at this point in the cycle. So this is step one, step two, step three, and step four. I haven't labeled them that way. But we're starting here, so this is step one. Step one is an isothermal compression. Step two is an adiabatic compression. Step three is an isothermal expansion. Step four is an adiabatic expansion. Isothermal, obviously, it's on the isotherm. Adiabatic, obviously, it's not on the isotherm. So that helps you remember it's adiabatic. That's an isotherm also. So that's an isothermal, sorry, did I say expansion? Expansion, expansion, compression, compression. Isothermal, adiabatic, isothermal, adiabatic. So the first thing we have to know is what this is. Now there's an infinite number of these, but we're going to be able to derive some conclusions about them that are general and the same for all of them. I think you can see that, in principle, I could stop this here and then do an adiabatic expansion here, then move here, and then do an adiabatic compression here. I could concoct any number of these Carnot cycles. What do we know for sure? Well, step one, Q is greater than zero. Because I'm doing an expansion, it's an isothermal expansion, work is less than zero, and that means Q's got to be greater than zero. This is an isothermal compression; work is going to be greater than zero for that, and that means Q's got to be less than zero, and I'll show you why that's true in just a second. Q, of course, is zero for these two steps, because they're adiabatic. These are the things that we know for sure. So how efficient is a heat engine? Any heat engine, not just a Carnot cycle. The efficiency is defined as the work that's performed divided by the heat that's absorbed from the hot reservoir. In a heat engine, there's a hot part and a cold part. In your car, the hot part is inside the cylinder. The cold part is outside the engine. That's the delta T that matters in an internal combustion engine. In a steam engine, there's the temperature of the steam and there's the temperature outside the steam engine. What we care about is the heat that's transferred from the hot reservoir, the inside of the cylinder, the steam. This guy.
So there's some numbers here, and the units don't matter. It could be kilojoules. But if we wanted to calculate the efficiency of this particular heat engine right here, it's easy. I'm extracting five units of work and dividing by 20. That's the total number of units that were transferred out of the hot reservoir. So that's the work right there. That 20 is the number coming out of the hot reservoir. The efficiency is 25%. Are you with me? Okay. Now, how efficient is a Carnot cycle? Well, it turns out that a Carnot cycle has an efficiency that's given by this equation right here, where that's the temperature of the cold reservoir. That's the temperature of this guy, and that's the temperature of the hot reservoir. All right. The efficiency depends on what these two temperatures are, and it can't be higher than this. So let's see if we can prove that this is true. Let's prove this, because that's the central thing that Sadi Carnot was able to do. All right. We want to calculate the work for each of our four steps. Okay. This is step one, two, three and four. All right. This is isothermal expansion. This is adiabatic expansion. This is isothermal compression. This is adiabatic compression. Everybody recognize those equations? Yes. And that's also, this is problem two on your exam. Now, that guy is the opposite of that guy. Right? Because I look at the limits of the integration. That's Tc to Th. That's Th to Tc. So that's the negative of that. So those two terms are going to cancel. Exactly. All right. What we're left with are these two guys. Okay. And to further simplify them, we have to notice that these two data points here lie on an adiabat. So that means that these temperatures and volumes are related to one another through this equation right here, where gamma is just the ratio of the constant pressure and the constant volume heat capacities. These two guys right here, these two data points, are also related through an analogous equation because they're also located on an adiabat. And so this equation holds true for any two temperature volume data points on any adiabat. Okay. So using these two equations, well, this is just that statement for the one on the left, and this is the statement for the one on the right. And now I can make substitutions. In fact, I can divide this guy by this guy. And I get this very simple expression that I can use to substitute now into this equation right here. I'm substituting that into this to obtain that. The total work is just minus nR times the temperature difference multiplied by log V2 over V1. So that's the total work for the whole cycle. Okay, and the transfer of heat in the first step, the isothermal expansion, is this; that was just the first term that we wrote. And since we already agreed that the efficiency of the heat engine is just that divided by that, why, it turns out if you make that substitution and you cancel terms, you get this equation right here. Try it yourself. Okay, so a heat pump is used to maintain the temperature of a building at 18 degrees C when the outside temperature is minus 5. For a frictionless heat pump, how much work must be expended to obtain a joule of heat? Is a joule a lot of heat or a little heat? Pretty big unit of heat. Pretty big heat unit. Answer: here's the efficiency. How much work must be expended to obtain a joule of heat?
Here's our Carnot expression for the Carnot efficiency. So solving for work, work is just equal to Qh times this guy. Right? And so a joule, so Qh is a joule of heat. All right, 1 minus, and these temperatures are of course converted to the Kelvin scale from those two guys. All right, and what I calculate is 0.079 joules, a pretty small number of joules. All right, but we agreed joules are a pretty big energy unit. Okay, and so the work necessary to pump one joule of heat from minus 5 degrees C to 18 degrees C, and I think you'll agree that's going to be thermodynamically uphill to do, because that's colder than that, all right, that's going to be 79 millijoules. 0.079: 79 millijoules. Very simple. Okay, what is the entropy change for each of the four steps of the reversible Carnot cycle? There's another question concerning the Carnot cycle. We need the thermodynamic expression for S, yes. All right, obviously if we can convert this differential into a delta, then we can just pay attention to these deltas: the change in entropy during step one and step two and step three and step four. The sum of those is going to be equal to the change in the entropy for the whole cycle, and that's what we want to know, all right, because S is a state function. So steps two and four are adiabatic, so Q is zero for those, and that means delta S is zero, because delta S is Q over T. All right, and so if Q is zero because that's an adiabatic step, step two is adiabatic and step four is adiabatic, those two entropy changes are also going to be zero. You can tell that right away. But that's not true for steps one and three. Step one is going to be Q1 divided by Th. We can integrate these directly, because this process is occurring entirely at Th. It's occurring in an isothermal way, and process three is going to be Q2 over Tc, because that process is occurring entirely on the cold isotherm. And so if we, we know what the pressure volume Carnot cycle looks like, but if we rewrite the Carnot cycle in terms of temperature and entropy, it's a box. Right, there's no change in temperature, but there's a change in entropy for process one. Process two occurs with no change in entropy, but a change in temperature between the hot and the cold. Right, you see what I'm saying? So the Carnot cycle represented as a temperature entropy process is a box. It tends to be a helpful thing, at least for me, to remember. Helps me remember that the entropy is not changing in process two or in process four. All right, it's constant because there's no heat flow. Okay, so steps two and four are adiabats, so Q is zero there, that means this, so let me emphasize one thing. So these two guys canceled; did I prove that? Did I leave a slide out? I don't think I proved that. All right, that guy and that guy are equal and opposite. What's that? Yes, but I don't think I adequately proved that that has to be the same distance as, I mean, in other words, this, sorry, this has to be the same distance as this. I think I still need to prove that to you, I guess, on Wednesday. Now, everything that we've said has so far pertained to a reversible process, but real processes are not reversible. Reversible processes happen infinitely slowly; real processes happen at some finite rate. So we have to address what that means for us in terms of the thermodynamics of real processes versus reversible processes. We're going to get to that on Wednesday.
We're going to talk, first of all, about this equation right here, and then we're going to talk about what happens when the process is not reversible. OK, and I think actually this is all I've got, even though I've got 20 minutes left. Sean, somehow I managed to go through these 75 slides faster than I thought I would. Does anybody have questions? Yes? Yes? So steps two and four are adiabatic. That should be Q3, sorry. OK, so we'll see you on Wednesday. Thank you.
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 12. Thermodynamics and Chemical Dynamics -- Entropy and The Second Law -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry). We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:04:00 Energy Is Conserved for an Isolated System... 0:06:51 Entropy 0:15:53 Carnot Cycle 0:24:18 Efficiency 0:27:52 Data on an Adiabat 0:33:50 Temperature-Entropy Diagram 0:34:48 S is a State Function
10.5446/18944 (DOI)
So today we're going to be going over some midterm review questions. It's going to be a lot similar to how the discussion sections work. We're going to go over some questions that you guys have all seen before, and this is the review. So the midterm is going to cover chapters 13 and 14, and today we're going to go over a good portion of chapter 13. And yeah, so let's get started. So there's going to be two problems on this exam. One of them is going to be covering the molecular statistical mechanics of a particular system, and the other one's going to be covering thermodynamics of gases, basically the equipartition theorem. And there's going to be a 10-point extra credit problem, which is extra credit, so we're not going to divulge the details to you. And so here's a summary of all the partition functions: translational, rotational, and vibrational. One thing you want to keep in mind is that the mass that you include in the translational partition function is in kilograms. And also, for the most part, just remember to mind your units. If the units aren't matching up and cancelling out in the way that they should, go back and rerun through your answer because, well, the numbers are sometimes hard to get a hold of. The other thing you guys might want to remember and maybe go back and look at is the symmetry number in lecture eight, where we go over how to calculate the symmetry number for any particular system. And for the electronic partition function, there's really no formula. You just add up the degeneracies and the contributions from each state. And beta, again, remember, is just 1 over kT. So anything that you see that's 1 over kT, you can just translate into beta. So for the first example, let's take a look at this problem. We've seen this before, I think, in discussion section one, where we talk about NO, with its doubly degenerate excited electronic state at 121 wave numbers and a doubly degenerate ground state at zero wave numbers. So the first thing, and this is obviously not going to be asked, well, it could be asked on the test, but we have to plot out the partition function as a function of temperature from zero to 1,000 K. The next part of the question asks for the term populations. What is the population in the excited state and what is the population in the ground state? And the electronic contribution to the internal energy at 300 Kelvin. So right here, we can see the simple MO diagram depicting this system with the two degenerate ground states and the two degenerate excited states. And so the partition function for the levels, as seen here, is the degeneracy times the contribution from the energy. So as we can see in this partition function, we're going to have a contribution from the ground state, which is doubly degenerate, where we set the energy equal to zero, and the contribution from the excited state, where the energy is going to be equal to 121 wave numbers. So when we put those two together, this becomes the partition function. And so once we multiply it out, minding our units, this is the resulting equation. As we plot this out as a function of temperature, we can see that at zero Kelvin, the only contribution that we're getting is from the ground state, as intuitively follows from knowing that the excited state is not going to be populated at zero Kelvin. And as we scale the temperature up to 1,000 Kelvin, we can see how the contribution from the second state contributes to the overall partition function.
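A short Python sketch of that two-level electronic partition function (the constant and function names are mine, not the slides'):

```python
import math

K_CM = 0.695035   # Boltzmann constant in cm^-1 per kelvin, i.e. k/(h c)

def q_elec(T, eps_cm=121.0):
    """Two doubly degenerate levels: ground at 0, excited at eps_cm."""
    return 2.0 + 2.0 * math.exp(-eps_cm / (K_CM * T))

for T in (1.0, 300.0, 1000.0):
    print(T, q_elec(T))   # ~2 near 0 K, ~3.12 at 300 K, approaching 4 by 1000 K
```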
So in order to calculate the term populations, we will use this formula, which you guys have all seen, where we take the number of particles or number of molecules in the particular state in question, in this case state i, over all of the contributing partition function elements. So in this case, since we want to calculate the population of the ground state, if we go back and look at the partition function, the contribution from the ground state is simply 2. So we leave that up here and we carry out the overall partition function that we derived earlier to get that at 300 Kelvin, the population of the ground state is about 64%. Now, the way to get the excited state is we can simply subtract this quantity from 1 and get 36%. Or, if you want to, kind of as a challenge to yourself, carry out the same calculation with the contribution from just the excited state in the numerator. And now we have to go over the electronic contribution to the molar energy. So which equations will we end up using? And this will be the equation sheet that you guys will see on Friday. And so the idea is that we throw a bunch of these on and, if you studied well enough, you should know which one specifically to use. And this is the one we want to use. Notice how it looks like the average energy per molecule, except there's no brackets on the E and there is an N depicting the actual number per mole. So as we use this equation at 300 Kelvin, we'll note that the partition function at 300 Kelvin is 3.119. And then we plug in our partition function to find the derivative with respect to beta. And this turns out to be the simplified version of this. So then, once we mind all of our units and we use this equation, we carry out the long unit conversion there. And the end result is going to be 519 joules per mole. Do you guys have any questions so far? I'm kind of blasting through this pretty quickly. And remember, any questions that you guys have, feel free to ask them in the discussion sections; I have one today and two tomorrow. Even if you're not technically registered in them, feel free to attend. And so here's another midterm exam question from a couple of years ago. We have three vibrational modes at 680 wave numbers, 330 and 973. So the first question we are asked is: if the molecule is cooled to 4 Kelvin, how much vibrational energy does it retain? And we'll get to the second two parts of the question as we get to them. So how do we figure this out? Well, we sum over the zero-point contributions of all the vibrational modes. And so we plug them in as wave numbers and our end result is, of course, going to be in wave numbers, which is going to equal 991 wave numbers. Or if you want to carry out the calculation in joules, you can convert all of these wave numbers to joules using hc. And then we'll get our answer in joules. So the end result in joules for this particular question is going to be 1.97 times 10 to the negative 20 joules. So for the second part of the question, we're asked: if a liter of this system is warmed up to 2,000 Kelvin, what fraction of these molecules is in the 680... Sorry. Oh, no, I'm on. I thought you raised your hand. Sorry. Yeah, what's up? What's the relevance of the 4 degrees Kelvin in the first part of the problem? This part? OK.
Well, we're trying to show that at 4 Kelvin the system is basically in the ground state; there is no thermal population of excited translational, rotational, or vibrational states. So yeah, this is where we were. So, for what fraction of these molecules is the 680 wave number vibration excited? In other words, how many of the molecules are in the excited 680 wave number vibrational mode? So which equations do we use for this? Now there's this one right here, which is the vibrational partition function, the calculation for which is blocked by that little thing up there. And this is the equation from problem one, which we use to calculate the population of each state in question. Now, when we plug the vibrational partition function into the denominator, do we use the vibrational partition function for simply 680 wave numbers, or do we use it for all three? Well, the answer for this one specifically is we use the 680 wave number vibrational mode, because that's specifically what we're asked about. However, if the question specifies that the 330 and the 973 wave number modes have zero quanta, then we include them. But I'll get to that in a second. It's actually down here. So in order to calculate this, we use the vibrational partition function for 680 wave numbers. It's just e to the negative epsilon over kT, or beta epsilon if you're used to that, over the vibrational partition function contribution from 680 wave numbers. So this is a simplified form of it, because if we take the denominator and carry out the simplification, this ends up up here, and this is the resulting part down there. Now, if you're asked specifically that the vibrational contributions from 330 wave numbers and 973 wave numbers are equal to zero, then we include them in the denominator. So back to the question. So what fraction of these molecules is the 680 wave number vibrational mode excited? So we calculate our term population using our by now well-used formula: degeneracy times the Boltzmann factor over the vibrational partition function. The end result after simplification looks like this. And so once we carry out our calculations, of course minding our units, we will have negative 0.489 for the exponent up here. Once we carry that over and calculate it, we notice that the population is 0.237. Does this actually make sense? So if we carry out the calculation for the temperature of the vibrational mode, we find that the vibrational temperature is 978 Kelvin, which is well below 2,000 Kelvin. So we expect the population of this mode to be appreciable. It turns out that, using this, the partition function for this mode is equal to 2.58, specifically in this example. Now hold on a second. OK. That covers the entirety of this question. I thought there was a part C. Do you guys have any questions on this problem? I'm kind of blasting through this. So we'll have some time at the end for questions and whatnot. So here's the equipartition theorem that we've seen. Now, if you guys remember the discussion section that we went over, I think it was in week three, we worked out the kT over 2 contribution for any quadratic terms in the Hamiltonians for the translational, rotational, and vibrational energy.
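Before the equipartition material, here is a consolidating Python sketch (my own names and constants, not the course's) that checks the headline numbers from the two problems just worked:

```python
import math

K_CM = 0.695035          # k/(h c), cm^-1 per kelvin
HC_J = 1.98645e-23       # h c in joules per cm^-1
NA = 6.02214e23          # Avogadro's number

# NO at 300 K: term populations and the electronic molar energy
T, eps = 300.0, 121.0
b = math.exp(-eps / (K_CM * T))
q = 2 + 2 * b
print(2 / q, 1 - 2 / q)              # ~0.64 ground, ~0.36 excited
print(NA * HC_J * eps * 2 * b / q)   # ~519 J/mol

# Cl2O zero-point energy: half a quantum per mode
modes = (680.0, 330.0, 973.0)
zpe = sum(m / 2 for m in modes)
print(zpe, zpe * HC_J)               # ~991 cm^-1, ~1.97e-20 J

# fraction of molecules with the 680 cm^-1 mode excited at 2000 K
T2, nu = 2000.0, 680.0
x = nu / (K_CM * T2)                 # ~0.489, the exponent quoted above
q680 = 1 / (1 - math.exp(-x))        # ~2.59
print(math.exp(-x) / q680)           # ~0.237; theta_vib = nu/K_CM is ~978 K
```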
So if we were to think about this in the Hamiltonian operator, for instance in this case it's the vibrational energy, we have a quadratic term here, momentum squared over 2m, and the potential quadratic term, half kx squared. So for any quadratic term, which can fit in the form of Ap squared or Bx squared, we will get a contribution of kT over 2. But for a mole of these guys, we use RT over 2. And you guys will remember that R and k are essentially the same thing, except R refers to a mole of these things, while k refers to one specific molecule. So in order to get the heat capacity, which is the amount of energy a molecule can store per unit temperature, all we have to do is take the kT over 2, or specifically our entire contribution of kT over 2 terms from vibrational, rotational, and translational states, and differentiate it with respect to temperature. And we find that the heat capacity for one quadratic contribution is going to be equal to k over 2. And there it is. So if we take the classical Hamiltonian for 3D translation, where we have three quadratic terms for momentum in the x, the y, and the z, we'll notice that our equipartition contribution is going to be equal to 3kT over 2, which makes sense because there are three quadratic terms. And so if we take our complete energy and differentiate it to get the heat capacity, we'll see that it is equal to 3R over 2 for a mole of the system. So as we can see on the graph, this is just the translational energy contribution that we see. And we're going to go over the other contributions that can happen here, and here as well, when we include the rotational and the vibrational contributions. So this also turns out to be the heat capacity for all monatomic gases, because for a single atom, rotation does nothing and vibration does nothing; it has nothing to vibrate against. So for molecules with more than one atom, vibration and rotation can also contribute, but the vibrational contribution doesn't turn on until the temperature of the system approaches the vibrational temperature. So consider a linear molecule at a temperature significantly less than the vibrational temperature, so that the vibrational contribution is insignificant, but higher than the rotational temperature, so that the rotational contribution is significant. We take into account, specifically for a linear molecule, two quadratic terms, one for rotation about the x-axis and one for rotation about the y, because rotation about the z-axis, the molecular axis, does not change the system. So since we have two terms, our contribution is 2kT over 2, or, for a mole of these guys, it will be 2RT over 2. And again, the heat capacity per quadratic term, when we calculate it out, is just R over 2 or k over 2. So once we have a linear molecule where the rotational states partake, we can approximate the heat capacity of the system for translational and rotational states by 3R over 2 plus 2R over 2, which will be 5R over 2. So this is again assuming that there is no vibrational contribution to the heat capacity at these temperatures. So once we take into account rotation, we can see that the heat capacity contribution will be 5R over 2, as we have just shown. Now for a nonlinear molecule, which can exhibit three quadratic terms for the rotational contribution, all we have to do is simply add another kT over 2 to make it equal 3kT over 2.
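The bookkeeping so far, in a minimal Python sketch (the nonlinear tally is completed just below; the helper is my own invention):

```python
def cv_over_R(atoms=1, linear=True):
    """Equipartition Cv in units of R, vibration frozen out:
    half an R per quadratic term, translation plus rotation only."""
    terms = 3                         # translation: px, py, pz
    if atoms > 1:
        terms += 2 if linear else 3   # rotation: 2 axes (linear) or 3 (nonlinear)
    return terms / 2

print(cv_over_R(atoms=1))                  # 1.5 -> monatomic, 3R/2
print(cv_over_R(atoms=2, linear=True))     # 2.5 -> linear molecule, 5R/2
print(cv_over_R(atoms=3, linear=False))    # 3.0 -> nonlinear molecule, 3R
```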
And similarly, again, for a mole of these guys, it's going to be 3RT over 2. And once we take the heat capacity of these guys together for a nonlinear molecule, our total contribution is going to be equal to 3R. Again, this is not taking into account vibrational states, because the temperature is too low in comparison to the vibrational temperature. So this is the contribution from the translational, and this is the contribution from the rotational states. So what about cases where the temperature in the room is high enough to have the vibrational states contribute? So we go back to the equation for the vibrational Hamiltonian, which only really contains two terms: half kx squared, and p squared, momentum squared, over twice the reduced mass. So following the rules of the equipartition theorem, we will get 2kT over 2, or RT per mode for a mole. But we have to take into account that a linear molecule has 3N minus 5 vibrational modes, or 3N minus 6 if it's nonlinear. So this takes into account the diatomic vibrational contribution. For a diatomic, 3N minus 5 ends up equaling 6 minus 5, one mode, contributing R. Translation and rotation give 5R over 2, so, as we can see, all of these things put together it's 5R over 2 plus (3N minus 5)R. So then we sum all those together, and we get 7R over 2. And we can see the full heat capacity contribution from translational, rotational, and vibrational states. So let's use this to solve for the constant volume molar heat capacity; specifically, we're going to go over one of these, I2, but using the same rules and principles of the equipartition theorem, you can solve for CH4 and C6H6. So the vibrational mode of iodine is 214 wave numbers. And will this vibration contribute to our constant volume heat capacity at 25 degrees Celsius? So let's calculate the vibrational temperature, where we have h nu over k. We plug in our vibrational mode, mind our units, and we calculate a vibrational temperature of 308 Kelvin; around and above that temperature, we expect this mode to be populated. So because 25 degrees Celsius, which is 298 Kelvin, is pretty close to this, we'll take into account the contributions from the vibrational partition function. So a quick approximation to determine whether vibrational modes will contribute is this inequality: if the vibrational temperature is less than or equal to two times the temperature in question, you want to include the mode. If you guys have played around with some of these functions, and I've played around with them in my discussion sections when somebody has asked why or when we take the vibrations into account, play around with the math. I've noticed somewhat of a one-to-one relationship, meaning that if the temperature is close to the vibrational temperature, the mode will contribute. However, if it's something about half, so if you're talking about a system that's at 150 Kelvin and the vibrational temperature is something like 300, it won't contribute very much, something out in the third decimal place. But again, play around with these things, get comfortable with them, it's going to make it a lot easier for you. But this is a simple rule of thumb that makes it easier to follow. So yeah.
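Here is that vibrational-temperature check for I2 in Python (constants and names are mine), including the rule-of-thumb inequality and the resulting heat-capacity tally evaluated in the next paragraph:

```python
H, C, KB = 6.62607e-34, 2.99792e10, 1.38065e-23   # J s, cm/s, J/K

def theta_vib(nu_cm):
    """Characteristic vibrational temperature h c nu / k, nu in cm^-1."""
    return H * C * nu_cm / KB

T = 298.15
theta = theta_vib(214.0)   # iodine's single stretching mode
print(theta)               # ~308 K
print(theta <= 2 * T)      # True -> include the vibration at 25 C

# heat capacity in units of R, with and without that mode
print(3/2 + 2/2)           # 2.5 -> vibration left out
print(3/2 + 2/2 + 1)       # 3.5 -> with vibration; experiment gives ~3.4
```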
So once we evaluate the inequality for this, we notice that twice the temperature is going to be greater than the vibrational temperature. So yes, we decide to include it. So we have two rotational degrees of freedom, because this is a diatomic species. And we include one vibrational mode, because the only way this can vibrate, since it's a diatomic, is against itself. So once we calculate this out, taking these constraints into account, we get 5R over 2 plus R, which is 7R over 2, or 3.5R. And the real value is pretty close: it's 3.4R. Note that if we leave out the vibrational mode, then our contribution becomes 2.5R, which is significantly less than the measured heat capacity. And my remote is having problems. I think this is the end. Do you guys have any questions? So I kind of blasted through this pretty quickly. Feel free to ask questions in any of the discussion sections on anything as far as stuff we've gone over. If you don't feel comfortable with the material, it's fine. Just ask us questions. We'll be able to help you. I have office hours on Friday, an hour before the exam, at 10 o'clock in Natural Sciences I, room 1115.
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 11. Thermodynamics and Chemical Dynamics -- Midterm I Review -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry). We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:00:52 Partition Functions 0:06:32 Vibrational Modes 0:12:06 The Equipartition Theorem 0:13:33 Heat Capacity 0:17:45 Classical Hamiltonian 0:18:12 Predictions of the Equipartition Theorem
10.5446/18941 (DOI)
So we're going to be talking about vibrational partition functions. But let me just say that you did well on quiz two again. Very happy with that. 69 of you got an A out of 107. It's outstanding. So there'll be another quiz on Friday. And then a week from Friday, we have midterm one. That's coming right up. So we'll have more to say about that. Okay. So we were talking about the symmetry number on Friday. This is a confusing topic. It isn't discussed very much in your book. All it is is the number of indistinguishable orientations of a molecule, period. The number of indistinguishable orientations of a molecule. So for simple molecules, you don't need any fancy thought process to discover what the symmetry number is. You can just take the molecule, label the atoms, turn it around, and figure out for yourself how many indistinguishable orientations there are. For example, SO3 is a molecule we talked about on Friday. Now I've taken it and I've labeled the oxygens so we can keep track of them. Let's figure out what the symmetry number is for SO3. I think everyone can see that if I take this guy right here and I rotate him by 120 degrees, the 120 degree rotation rotates this guy down here, this guy goes over here, and this guy is going to end up over here, right? And so that's a 120 degree rotation right there. I just took the molecule and I went, er. And if I do it again, this 3 is going to rotate down here. Er. Okay. So there's 3 orientations shown on the screen. Is that it for this molecule? What if I take this guy and I flip him over? Like that. What did I do? I flipped him over so that the 2 is down here now. The 1 is up here but the 3 didn't move. All right, I took the guy and I flipped him over like that. All right, with these guys I took this guy, I rotated. I rotated, now I flip. All right, does this look exactly like this? It does. All right, the oxygens are all in the same place but this is not the same as this, this, or this. All right, I kept the labels on these oxygens. Does everyone see that? The reason I could do this is because this molecule is flat. I think you can see if I do this flipping thing with ammonia, I'm not going to get something that's indistinguishable because these 3 guys, in the case of ammonia, are hydrogens. They're either going to be sticking out of the screen or sticking into the screen, and when I flip the molecule over, it's going to look totally different. All right, so the fact that I can flip this guy over and get an indistinguishable orientation is only because he's flat, and then once I've got this, I can do the 120 degree rotation business again, and so I end up with 1, 2, 3, 4, 5, 6 indistinguishable orientations for this molecule that are possible. I don't think there are more than that. All right, so you can just physically count them. The symmetry number is just the number of indistinguishable orientations, right? That's what it is. Now, I have a little recipe for finding this that I tried to tell you about on Friday, and the reason I like this scheme is because it works when the molecule gets bigger and more complicated and you don't want to label every atom and turn it around and flip it over and count and count and count, right? This gives you a faster way to find the symmetry number. What we do is we identify an n-fold symmetric axis. What does that mean? Well, here's an axis for SO3 that's two-fold symmetric, right? I say it's two-fold symmetric because I can do a 180 degree rotation of this guy and I get the same orientation.
So it's a two-fold symmetry axis. Then you decide whether this axis contains a mirror plane, right? Does this axis contain a mirror plane? If I put a mirror right here, does this side of the molecule look just like this side of the molecule? No, right? So this axis does not contain a mirror plane. And then you count the number of these axes and you multiply by that number, and then you multiply by two if the answer to two was yes. How many axes are there? One, two, three. Count the number of these axes. In this case there's three. Right? Multiply by the n from the n-fold and then you multiply by two if there was a mirror plane. Okay, so in this case we've got three axes times two-fold symmetric times one, because we're not going to multiply by two. Two times three is six. Symmetry number is six. Everyone see that? Three steps. Identify a symmetry axis. Decide whether it contains a mirror plane. In this case it doesn't. I can't put this bar anywhere along this axis and get a mirror plane. Okay? If I could do that, then I would be multiplying by two and the symmetry number would be 12. Okay, let's do this guy. Aluminum chloride. Once again, because the molecule is small, I can put a label on every single atom and I can turn it around and rotate it and I can figure out how many indistinguishable orientations there are. I can just do it myself without using any formulas. Here's the aluminum chloride. I labeled all the atoms. Now if I rotate this guy like this, okay, the one is going to rotate down here and the two is going to be up here. The four is going to rotate around to where the three is. The six is going to rotate up like that. So I think you can see if I rotate the molecule like this, I'm going to get this guy right here. Everyone see that? And now if I take that guy and I rotate him like that, what happens? The six is going to rotate down to where the one is, right? The five is going to rotate up to where the two is, all right? But the three and the four are not going to rotate. The three and the four are going to stay; the three is going to stay in the front because I'm rotating around like this. I'm sort of rotating around that axis that goes right through the three and the four. Okay, so can everyone see that if I do that rotation, I'm going to end up with this guy. And he's different from this guy and different from this guy, but you can see they're all indistinguishable. If I didn't have these chlorines labeled, you couldn't tell the difference, could you? Okay, and then I can, with this guy right here, rotate him like that again, right? And if I do that, five rotates down, six rotates up, four rotates to the front, three rotates to the back, you get the idea. Yes? What about an axis that goes in between the three and four, up and down? Yeah. And then it rotates. So does that mean there's six? Four. Now, if I take this guy, I'm going to prove it to you in a second, what if I take this guy and I rotate him like I did that guy, and two goes down here, one goes up there, right? If I rotate like that, the one is going to go up, two is going to go down. I get this guy right here if I do that. There are no other orientations. Okay? And if you don't believe me, mess around with it. All right, four possible orientations; the symmetry number for this molecule must be four, right? How do we find it with the shorthand method? Well, choose an axis. There's two possible symmetry axes that you can choose. In fact, there's three. We could use yours also. All right?
Your axis goes up and down, right through the center of the molecule, right? Okay. The one I'm using goes right through these guys, these two guys right here, these two aluminums. Okay, but we could also use one that goes right through these chlorines right here. All right, there are in fact three orthogonal axes that we could use for this guy. All right, but let's just use this guy for the sake of argument, okay? Is there a symmetry plane along this axis? Now, of course, if I had some artistic talent, I could draw it properly, but is there a symmetry plane there or not? How many people think there's a symmetry plane? How many people don't see it? Okay, well, that's pretty good. And the answer is yes, there is, all right? And if I could draw this properly, you'd see that this side of the molecule looks exactly like this side of the molecule. Okay, so there's one axis, there's two-fold symmetry, and the axis does contain a mirror plane, so that question is a yes, and we multiply by two. Two times two is four, times one is four. Everyone see that? And if I had chosen this axis, it would still be yes to the mirror plane question. It would still be two, right? Because I'd have to rotate this only; it's only got two-fold symmetry there. All right, so it would still be one axis times two-fold symmetry times two, and if I chose this axis like this, it's still going to be one axis times two-fold symmetry times two. You should get the same answer every which way you look at it. Otherwise, if these answers are not self-consistent, there's a mistake somewhere. Yes? No, there's only one axis like this; there's two aluminums, and there's only one way to draw an axis through those two aluminums, right? If I wanted to draw the axis through these two chlorines right here, there's only one way to do that, okay? If I want to draw the axis through the center of the molecule, up and down like this, there's really only one way to do that. All right, so there's really only one of each one of these. Yes? Yes? So if you have more than one mirror plane on the axis, is it still multiplied by two? There couldn't be more than one mirror plane. That's impossible. That's like some kind of fun house. Two mirror planes, no. You can have at most one, but you might have none. Let's do benzene. Okay, once again, we can do the labeling experiment. Call that one A, B, C, D, and so on. All right, I can rotate them down here. That's two orientations. You can't tell the difference. All right, if I go all the way around and I flip them over like a pancake and I do it again, I'm going to have 12 orientations. Does everyone see that? All right, if I just rotate them by 60 degrees, boom, boom, boom, boom, I'm going to get six, and then I flip them over like a pancake and I do it again. I'm going to get 12, right? All right, how does the Penner method give this? All right, there's one, two, three equivalent axes that I can draw through the carbons. Everyone see that? One, two, three. Does this axis contain a mirror plane? Right there. Yes. Okay, so the calculation is three axes times two-fold symmetric times two. Two times two is four, times three is 12. Boom. Everyone see that? I could also draw the axis right through the center of the molecule coming out of the screen. I could choose that as an axis too. All right, how many of those axes are there? One. Is there a mirror plane? Yes, it's right in the plane of the screen. It's right there.
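The three-step recipe lends itself to a one-line helper. Here is a hedged Python sketch (my own encoding of the recipe, not anything from the course materials), checked against the molecules counted above:

```python
def symmetry_number(n_axes, n_fold, has_mirror):
    """Equivalent axes x n-fold symmetry x 2 if the axis holds a mirror plane."""
    return n_axes * n_fold * (2 if has_mirror else 1)

print(symmetry_number(3, 2, False))  # SO3: three 2-fold axes, no mirror -> 6
print(symmetry_number(1, 2, True))   # Al2Cl6: one 2-fold axis with mirror -> 4
print(symmetry_number(3, 2, True))   # benzene, via the in-plane axes -> 12
print(symmetry_number(1, 6, True))   # benzene, via the out-of-plane axis -> 12
```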
It goes right through, cuts the molecule right in half. Okay, there is a mirror plane, and so if I do that calculation, one axis times six-fold times two is 12. You get the same answer, right? It's six-fold symmetric now. One, two, three, four, five, six. For that axis, it's six-fold symmetry, not two-fold. Everyone see that? Okay, we're not going to beat this to death anymore, but you should be able to look at a molecule and figure out what its symmetry number is. Any which way you want. Label the atoms, rotate it around in your head, cut out a model. Okay, on the exam, it will be easier if you can do it my way. Okay, now, estimate the rotational partition function for HCl at 25 degrees and 250 degrees. Its B value, the rotational constant, is 10.59 wave numbers. Okay, actually a pretty big energy for a rotational constant. Okay, here's our choices. Here's the equations that we flashed on the screen on Friday. We're going to use this guy. I hope you've noticed that this guy and this guy are basically the same equation, because if the linear molecule lacks a center of symmetry, sigma equals one. Okay, so these are just the same equation, for goodness sakes. Okay, but in this case, the molecule lacks a center of symmetry. So we're going to use this guy. That's the rotational temperature, theta sub R. The rotational temperature is just B over k, B in units of joules, of course. We're going to use this equation. Okay, and so the rotational temperature here, I can calculate what it is: 10.59, convert to joules, divide by k, and I get 15.23 degrees Kelvin. That's the rotational temperature. Pretty cold, huh? Actually, it's usually colder than this, all right, but that's pretty cold, 15 degrees Kelvin. Okay, so this temperature is much lower than either one of these target temperatures. That's 25 degrees C, not K. That's 250 degrees C. Okay, so we expect to have lots of thermally accessible rotational states, don't we? All right, we're at temperatures that are way higher than the rotational temperature of this molecule. All right, 15 degrees. Okay, so let's calculate what the partition function is. This is our equation, it's just kT over B, so there's kT, and this is just the unit conversion for B, all right, 10.59, converted to joules. The only thing that's different between the two cases is the temperature. That's the partition function I get. Do those numbers look right? What do you think? They look about right? I mean, we expect something significantly bigger than one, don't we? We're at 25 degrees C, we're at 298 K. The rotational temperature is 15. All right, so lots of rotational states are going to be occupied. 15 degrees is when we start to have multiple rotational states occupied. All right, we're way above that. We're going to have lots of rotational states occupied; qualitatively, that looks about right. Notice that 250 is not 10 times 25 in Kelvin units. All right, it's about a factor of 2. All right, and that's also what we see here. So we probably didn't make a math mistake here. Okay, that was easy. What about methane? Let's calculate its partition function at 298 degrees K. Its rotational constant is 5.2412 wave numbers. These rotational constants are often known to 4, 5, 6 sig figs, right? Because in spectroscopy you can measure the frequency very, very, very accurately, right? It's not too often in physical chemistry that we can measure something with this kind of precision, all right?
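Here is a quick numerical check of the HCl estimate in Python, with the methane case, worked just below, included for comparison (the helper functions are mine, and the q values are what I compute rather than numbers quoted from the slides):

```python
import math

K_CM = 0.695035   # k/(h c) in cm^-1 per kelvin, so K_CM * T is kT in cm^-1

def q_rot_linear(B_cm, T, sigma=1):
    """Linear molecule: q = kT / (sigma h c B)."""
    return K_CM * T / (sigma * B_cm)

def q_rot_spherical(B_cm, T, sigma):
    """Nonlinear molecule with A = B = C: (sqrt(pi)/sigma) (kT/hcB)^(3/2)."""
    return math.sqrt(math.pi) / sigma * (K_CM * T / B_cm) ** 1.5

print(q_rot_linear(10.59, 298.15))          # HCl at 25 C  -> ~19.6
print(q_rot_linear(10.59, 523.15))          # HCl at 250 C -> ~34.3
print(q_rot_spherical(5.2412, 298.15, 12))  # methane      -> ~36.7
```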
But with spectroscopy you can measure rotational constants, right, with very high precision. Five sig figs, right? We know it to within one part in 10,000. We need the big boy for this, all right? He's a nonlinear molecule, methane. Okay, the moments of inertia, though, are all the same because of the symmetry of the molecule. So it's just B cubed. Okay, we need the symmetry number. What is it going to be for methane? Think back, four vertices, what's the symmetry around each vertex? Three, did someone say three? Four vertices, each of them is three-fold symmetric. Is there a mirror plane along those axes? No, so what's the symmetry number? Right, 12. Okay, so we plug everything in. This is one over 12, blah, blah, blah, blah, blah, unit conversion, blah, blah, blah. 36.711 is the partition function that I calculated. Does that seem about right? Yeah, it's higher than what we saw for HCl. All right, but B is a lot smaller. All right, B is half as big as it was for HCl, so at this temperature we expect Q to be larger. It's about twice as large. All right, but hey, B is half as big. So it makes perfect sense. Okay, these rotational partition functions are pretty easy. We've only got these three equations, and in fact it's really only two equations. Okay, now, vibration. Yes? That is the rotational constant about each of the three principal axes of the molecule, X, Y, and Z. And so in the case of methane, because it's four hydrogens and it's tetrahedrally symmetric, those three rotational constants are all going to be the same. Now, if it was chloromethane, if you had one chlorine and three hydrogens, then there would be one unique rotational axis that had a different B value. So it would be AB squared, okay? And you'd have to be told what they are. You'd have to be told. That's why we need that big equation. Okay, so, harmonic oscillator. Remember way back when we were talking about it. Here's the harmonic potential: one half k x squared, where x is R minus Re, and Re is the equilibrium bond distance. Everyone remember Hooke's law? Evenly spaced energy levels. This is what they are: v plus one half, times h nu, where v is the vibrational quantum number. Okay, we have our normal expression for the partition function. We just plug this energy in and we've got the partition function. It always works this way. And then all we do is simplify this. But we always start with this. This is always the expression for the partition function, no matter whether we're talking about translation, rotation, or vibration. You can derive the partition function by just plugging the energy into this expression and simplifying it. Always. Now, to make this a little bit simpler, we often neglect the zero point energy. Later on, if we calculate the energy using the equations that I'm going to show you, we can always add in that zero point energy at the end. We know what it is. We can put it back in if we want to make sure that we don't neglect it. Okay? So if we neglect the zero point energy now, the one half goes away. We get this guy right here. So if I write that series out, it looks like this. V equals zero, one, two, three, and so on. Okay? Now, there's a nice closed form expression for what this series sums to. All right? It's going to sum to this if that's an x. Of course, it's not an x. It's an h nu over kT. Okay? So we're going to use the same form. We're just going to substitute in h nu over kT. And that's our vibrational partition function right there. Really simple expression.
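That closed form is just the geometric series; here is a two-line numerical sanity check (x is an arbitrary stand-in for e to the minus h nu over kT):

```python
x = 0.8                                    # plays the role of exp(-h nu / kT)
partial = sum(x**v for v in range(1000))   # summing the ladder term by term
print(partial, 1.0 / (1.0 - x))            # both ~5.0: the series sums to 1/(1-x)
```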
How big is it? Well, let's just say a typical vibration is 2,000 wave numbers. Now, that's actually a pretty energetic vibration. All right? But 2,000 wave numbers, I think you'll agree. All right? That's not as energetic as a proton vibrating. It's got a lower energy than that. Okay? And so that's what the frequency turns out to be. And so if I plug in that number and I calculate what q is, I get almost exactly one for the partition function. All right? At room temperature now, the partition function is 1.000064. Okay? So this is totally different than what we saw with translation and rotation. With translation, we had millions of accessible states, didn't we? We had a partition function that in one dimension, if we had a one micron box, was 67,000. Remember that? Make it a three-dimensional box. You've got millions of accessible states. With rotation, we get 20, 30, 40 rotational states that are accessible at room temperature. With vibration, we get one. All right? What does that number mean? It means that molecules are almost always in their ground vibrational level at room temperature. All right? It is unusual for the vibrationally excited states of a molecule to be excited at room temperature. All right? That would only happen if you had really heavy atoms like I2, right? Iodine has a vibrational energy of 214 wave numbers. How much thermal energy is there at room temperature? 200. Okay? So iodine is excited. All right? But my goodness, I mean, that's 127 grams per mole for each atom, bowling balls, all right? They're barely moving, all right? Very low energy. It doesn't take much energy to excite that vibration. At room temperature, by golly, it is excited, all right? But it's the exception. Okay? So at room temperature, just one vibrational state is thermally accessible in general. There's exceptions, but in general, right? Very different from translation and rotation. Now, let's do a calculation. The triatomic molecule chlorine dioxide has three vibrational modes at frequencies of 450, 945, and 1100 wave numbers. What's the partition function at 298 degrees Kelvin? Okay? How much thermal energy is there at room temperature? 200 wave numbers. What is the lowest energy vibration here? 450. Okay? So we want to keep that in mind, because what do we expect the partition function to be here? Just looking at this problem before we calculate anything. Are we expecting a partition function of 10? 100? How about 1? Should be close to 1. 1 and change, right? 1 and something, right? 1.0 something, maybe. Right? Is that what our intuition is telling us? Yeah, because the thermal energy is way lower than that. Okay, let's do it. There's our molecule. How many vibrational modes does it have? Three. All right, but it's a good question to ask. Did I say anything about the degeneracy of these three states here? I didn't say anything about it, but we know that it's a nonlinear molecule, so that's 3n minus 6: 3 times 3 is 9, minus 6 is 3. So there are only three vibrational modes in this molecule. If I give you three energies, you know they're all non-degenerate. Right? Okay, so 3n minus 6 is 3. And so the partition function is just going to be that times that times that. What's the partition function? What's that guy? All right, I just take my expression for the partition function. I plug in my beta, my h, my c, all my constants. That's the temperature that we're talking about, to get 1.12. Actually a little higher than we were expecting.
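A hedged Python sketch of these single-mode estimates (my numbers: I get roughly 1.13 for the 450 wave number factor, close to the 1.12 quoted, and the product of all three modes comes out around 1.15):

```python
import math

K_CM = 0.695035   # k/(h c) in cm^-1 per kelvin

def q_vib(nu_cm, T=298.15):
    """One harmonic mode, zero-point energy omitted."""
    return 1.0 / (1.0 - math.exp(-nu_cm / (K_CM * T)))

print(q_vib(2000.0))   # ~1.000064: a stiff mode, effectively one state
print(q_vib(214.0))    # ~1.55: iodine's floppy stretch is thermally excited
print(q_vib(450.0))    # ~1.13: the lowest ClO2 mode, "one and change"
print(q_vib(450.0) * q_vib(945.0) * q_vib(1100.0))  # ~1.15 for all three modes
```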
We expected 1 and change, 1.0 something. All right, it's a little higher than we expected. Okay, and if I do the same calculation for 945 and 1100, I should get smaller numbers, yes. Smaller than that. Okay, and so the overall partition function is that times that times that, or that. All right, now do these numbers all make sense? Well, you know, first of all, they're all 1 and change. Secondly, that one's bigger than that, which is bigger than that. All right, if these aren't in the right order, that's a dead giveaway that we've made some kind of mistake. Okay, and so qualitatively, yes, we can live with that number. It seems about right. It's a little higher than what we were expecting maybe, but just barely. Okay, now if we make it really hot, this is 200,000 degrees. This is 100,000 degrees. All right, then you can in principle have lots of accessible vibrational states, and they increase quasi-linearly with temperature. Of course, the molecule falls apart way before you can do this. All right, there's no such thing as a bond dissociation energy that is large enough to allow you to see this, but if you could, if the molecule didn't fall apart at these temperatures, this is what you would see. All right, this is for a 1,000 wave number mode. This is for a 100 wave number mode. The vibrational temperatures are 144 K and 1440 K. Okay. We can always calculate the vibrational temperatures, just h nu over k. Yes, it would fall apart. That's only about one electron volt for most molecules. Okay, so 298 degrees Kelvin is a high temperature for translation, a high temperature for rotation, but a low temperature for vibration. We want to keep that in mind. Right, lots of translational states, lots of rotational states, but in general very few vibrational states are thermally accessible at temperatures near room temperature. What about the energy? Well, we've got this nice expression that we derived, chemistry's least intuitive equation. All right, all we have to do is plug in our partition function here and here. We can calculate the energy directly from that. Okay, and so you can do that, simplify it. There's your equation, 13.39. That's the energy for N molecules. All right, that's the vibrational quantum, h nu. This equation omits the zero point energy, so we can add it in; all right, we can calculate the energy using this equation and then add in h nu over 2 to make that zero point correction if we're worried about it. If there are multiple states, we have a term like this for every state. In other words, more than one h nu; energy is additive. Makes sense. At temperatures that are really high, blah, blah, blah, blah, blah, blah: if the temperature is much higher than the characteristic vibrational temperature, we can approximate the denominator using the first two terms of the infinite series. The infinite series looks like this. Take the first two terms: the 1 minus 1 is zero, h nu cancels with h nu, so we just end up with big N over beta. N is the number of molecules, so the total energy is just NkT, and the reason that's interesting is because that's what we were calculating when we did the equipartition theorem. We wrote the classical Hamiltonian for vibration. It had two terms, and so it's 2 times kT over 2, which is just kT per mode, remember this? Okay, so the equipartition theorem tells us that we've got RT in terms of energy per mode, or R in terms of the heat capacity. So we've got 3R over 2, or 5R over 2, 7R over 2, remember this?
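Here is a small numerical check of that high-temperature limit (my constants, using the 1,000 wave number mode from the plot): the vibrational energy per molecule climbs toward kT as T passes the vibrational temperature.

```python
import math

H, C, KB = 6.62607e-34, 2.99792e10, 1.38065e-23   # J s, cm/s, J/K
eps = H * C * 1000.0          # one quantum of a 1000 cm^-1 mode, in joules

for T in (300.0, 3000.0, 300000.0):
    U = eps / (math.exp(eps / (KB * T)) - 1.0)    # energy per molecule, no ZPE
    print(T, U / (KB * T))    # ratio -> 1: the equipartition (NkT) prediction
```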
Yeah, well, it turns out this was right. It's the high temperature limit. If you take the partition function and you look at the high temperature limit, you get the equipartition prediction, right, for the contribution to the heat capacity and the contribution to the internal energy. Okay, here's a midterm exam question from a couple of years ago. You're going to have one like this on yours. On this exam, almost everyone got A wrong, but A is easy. What is the answer to A? A dichlorine oxide molecule is cooled to 4 degrees Kelvin. How much vibrational energy does it retain at that frigid temperature? Zero. That's what they said. How much energy does that molecule retain at 4 degrees Kelvin? How much vibrational energy does it retain? We're physical chemists. Yes, one half h nu. How many h nu's are there? One, two, three. This molecule has three vibrational modes. You can't suck the zero point energy out of any one of them, even at 4 degrees Kelvin. The molecule is going to keep bending a little bit. It's going to keep asymmetrically vibrating and it's going to keep symmetrically vibrating even at 4 degrees Kelvin. Right? Okay, so the answer is the zero point energy at 680, the zero point energy at 330, and the zero point energy at 973. Calculate that. You get 991 wave numbers. It contains 991 wave numbers of energy even at 1 millikelvin. You can't remove that energy from the molecule. Quantum mechanics says you can't do it. And you guys have had 20 weeks of quantum mechanics. Does everyone see what I'm talking about here? You can't remove the zero point energy from any mode. That's true. You had 20 weeks, not with Penner, though... Stop. Okay. I convert that to joules. Now, one mole of Cl2O is warmed in a one liter container to 2,000 degrees Kelvin. What fraction of these molecules is the 680 wave number vibrational mode excited? All right? You're going to have something very similar to this to answer. Here's what the equations page of that exam looks like. Okay? Equations are intentionally disorganized on this page, the purpose being that you have to fish out the right one, and it will not be obvious which one to use. So if you don't know what you're doing, you could be in trouble. You need this guy and this guy. That's the partition function we just derived, right? For vibration. All right? And that is the normal equation. That's the Boltzmann distribution law. All right? In the denominator of the Boltzmann distribution law is the partition function; we just derived this form of the partition function for that equation. So we're just going to plug that in for that right there. Okay? And what do we want? We want little n over big n, don't we? We want to know what fraction of the molecules. Okay? Now, here's the confusing part. Am I going to use in this partition function the overall partition function for the molecule? In other words, Q, Q, Q multiplied together? Or am I going to just use Q680? Because I'm only asked about the 680 wave number mode. I'm asked: is the 680 wave number mode excited? All right? You see that conundrum? Can everyone see that the difficulty in the calculation here is what are you going to use for this Q down here? Are you going to use the overall partition function for the molecule, which has three terms in it? Or are you just going to use the Q680? All right? You've got to decide. Here's how you make the decision. All right?
If you are asked to calculate the fraction of the molecules for which the 680 mode is excited, meaning it's in its first excited vibrational level and the other two modes are anything, then you only use Q680 down here. In other words, if the identity of these other two modes isn't specified, they can be anything, and all we want to know is whether the 680 is excited or not, then you just put Q680 down here. But if, on the other hand, you're asked, let's say, I want to see if the 680 mode is excited and these other two are not. I want to know what fraction of the molecules have 680 excited and the other two modes unexcited. So you've specified what the other two modes are doing. Hey, that's a different story. You've got to include all three. You with me? So if you specify all three, you need all three in the denominator. If you specify one, I only want to know what one is doing, I don't care about the other two, you only use the one that you care about. I don't think your textbook does a good job of explaining this fine point. So for the one-mode case, let's work it out precisely. OK, so this is our equation right here. All right, plug in the numbers. This is what the denominator is equal to right here. I just calculated that's what this is right here. So I'm just putting in the exponent, rather. It's minus 0.4890086. And so when I plug that in here and here, I get that 23.7% of the molecules will have the first excited state excited. At 2,000 degrees Kelvin. Does anyone have any intuition about whether this number makes sense? Because I don't. 0.24, at 2,000 degrees Kelvin. In order to figure it out, we need to know what the characteristic vibrational temperature is for 680. So we can calculate that: 978. Now we compare that with 2,000. Is 2,000 higher than that? Yes. So we expect there to be an appreciable population in this vibrational state. We're at about twice that. At 2,000 degrees, we've put more energy into the molecule than necessary to populate this state. So we expect this state to have appreciable population. Everyone see what I'm talking about? 978 degrees: that is how much thermal energy it would take to start to populate this vibrational level. At that temperature, we would expect the vibrational level to just start getting populated. And we're at 2,000 degrees. So we expect there to be an appreciable population. 24% could be. Doesn't sound unrealistic. Everyone see what I'm talking about? Now, what if we asked it differently? Everybody remember, by the way, what these numbers up here, these superscripts in front of the element, everybody remember what those are? I didn't use them here, but now I'm using them. What are those? What is that 35 right there? Yeah, it's the atomic mass. I'm talking about an isotopically pure sample of chlorine oxide. The 16 is the mass of the oxygen in atomic mass units. The 35 is the mass of the chlorine. How do I convert that 35 into the actual mass per chlorine atom? How do I do that? What if I want to know the mass of a single isotopically pure chlorine 35? What do I do? What are the units of that 35? Grams per mole. So if I want to know how much one chlorine 35 weighs, I just take 35 and divide by Avogadro's number, right? Yes. That's what you do. OK, it's got three vibrational modes, boom, boom, boom, all given to you. So what fraction of these molecules have the 680 excited, the 330 excited, but not the 973? Now we're going to specify all three.
I want one quantum of energy in that guy, one quantum of energy in that guy, and I want that guy to be in his ground state. I could ask that question. If I ask the question that way, I'm asking for all three, so you've got to use the overall vibrational partition function down here. See that? And then that energy right there is the total energy. Two tricky things. You're going to use the overall vibrational partition function because I'm telling you I want all three of these things to be a certain way, any which way I ask you for them to be. If that was a 2 and that was a 5, it would still be the same. I'm still telling you how I want those three states to be configured. And then that energy is the total vibrational excitation energy in the molecule, the total. So I said one 330 and one 680: add them together. If I said 2, it would be two 680s and two 330s. The energy here is the total vibrational energy for all of the different excitations that you want to consider. You see that? I think this is rather confusing. I'm sorry? Degeneracy. What if there was degeneracy? You said the degeneracy is 1, but what if the 680 or the 330 were degenerate? Yeah. So if 680 was doubly degenerate, there'd be two of these terms, two of the singly degenerate terms multiplied together for 680. OK. And we did the calculation. You get a smaller number, blah, blah, blah, blah, blah. Yes, it makes sense, because it's smaller than it was before. It should be a lot smaller, because now we're talking about a tiny subset. First of all, there's two things excited. The other thing is, we're insisting that the third mode be unexcited. So only a tiny fraction of molecules are going to meet these specifications. OK.
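A hedged Python sketch contrasting the two ways of asking the question (Cl2O modes at 680, 330, and 973 wave numbers, T = 2,000 K; the helper names are mine, and the final fraction is what I compute):

```python
import math

K_CM = 0.695035   # k/(h c) in cm^-1 per kelvin
T = 2000.0
modes = (680.0, 330.0, 973.0)

def q_vib(nu_cm):
    return 1.0 / (1.0 - math.exp(-nu_cm / (K_CM * T)))

def boltzmann(energy_cm):
    return math.exp(-energy_cm / (K_CM * T))

# "Is the 680 mode excited, other two anything": only q(680) downstairs.
print(boltzmann(680) / q_vib(680))   # ~0.237

# "One quantum in 680, one in 330, 973 unexcited": the total excitation
# energy upstairs, the full three-mode partition function downstairs.
Q = math.prod(q_vib(nu) for nu in modes)
print(boltzmann(680 + 330) / Q)      # ~0.02, a much smaller fraction
```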
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 07. Thermodynamics and Chemical Dynamics -- Vibrational Partition Functions -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry). We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:00:41 The Symmetry Number 0:07:09 Aluminum Chloride Atoms 0:13:03 Example: Benzene 0:15:43 Rotational Partition Function of HCl 0:19:12 Rotational Partition Function of Methane 0:22:02 Vibrational States 0:33:24 What About Vibrational Energy? 0:36:13 Vibrational Modes
10.5446/18939 (DOI)
Okay, so we have a quiz on Friday, as you know. And it's just going to be exactly the way it was last Friday. The scantrons are going to be up there at the front. Grab one when you come in. Don't sit next to anybody. We had plenty of room last week, right? For some reason, they gave us this giant lecture hall, which is really nice. I haven't written it yet, but I'll write it this afternoon. When I do that, I'm going to look at what I've been telling you for the last three lectures. I'm going to look at discussion guide 2, which is posted now on the lectures page of our website. I'm going to look at the assigned homework, and then I'm going to brew up something with five questions on it. The first two questions are going to be pretty easy. The last three will be a little bit harder. Okay? Just like it was last week, it's open book, open notes, open anything, except we're not going to allow tablet computers and regular computers this time. Okay? You can still bring your tablet computer and your laptop if you want, and you can use it after the quiz is over, but you can't use it during the quiz. Use any kind of calculator that you want. Okay? And please, do not use this as a reason to print out the lecture notes, because that's very destructive. All right? Write anything down you want, any formulas, put Post-it notes in your textbook. Okay? Any questions on quiz 2? Even though I try to include all this other stuff, I tend to mainly look at what's in the lectures. I try to make sure that I haven't asked you any questions on the quiz that I haven't answered in the lecture. That's what I tend to do. You might also want to look at quiz 2 from last year, which I posted on the announcements page, because I'm lazy and I'm likely to steal problems off quiz 2. It's just the way I am. Okay? So today we're going to talk about a subject that's hardly discussed at all in your book, but I think it's really cool and it really helps us obtain an intuitive understanding of how this statistical mechanics stuff works. All right? In real molecules, the picture is considerably more complicated than this. This is where we started with the statistical mechanics: an evenly spaced ladder of states such as that, which we would obtain if we had a harmonic oscillator. But real molecules don't even vibrate this way, even though we use the harmonic oscillator to approximate vibrations in real molecules. There's anharmonicity in real molecules. All right? Translation comes pretty close to the harmonic oscillator picture, but even in this case, the states are not evenly spaced, even though I've drawn them evenly spaced here. In reality, they're not. I'll show you that later on. And then these translational states are really close together. There's many of them between each rotational state of the molecule. Here's the rotational states. And there's many of these translational states that are sandwiched in between each one of these rotational states, thousands in most cases, hundreds of thousands. And then these rotational states, there are many of them in between each one of these vibrational states, aren't there? So there's a lot of complexity in real molecules. Now, we really haven't started to talk about it. It's been discussed in your book, and you've been doing some homework problems that relate to this issue, but we really haven't talked about it in class. We're going to be lucky in that we can pick this problem apart. We can treat each of these energetic manifolds.
Each one of these things is what I'm calling a manifold. We can treat them separately, and at the end of the day, we can just take the partition function for translation and multiply it by the partition function for rotation, and so on. We can calculate each one of these partition functions separately and multiply them together to get the total partition function for the whole molecule. And the reason this works is because these degrees of freedom are so-called weakly coupled. They don't talk to each other. So we can multiply them together, and we can calculate them separately. We're going to get the total partition function that we're looking for, for every different kind of molecule that we care about. Now, we're going to start talking about that later on today, but in this lecture, we're going to talk about a shortcut to calculating not this guy, but something related to him: the heat capacity and the internal energy. A shortcut, and it's called the equipartition theorem. I think this is discussed right at the end of Chapter 13, but it's discussed very briefly, not in enough detail to really understand. And then at the end of this lecture, we'll start to talk about the translational partition function. Okay, so when your book talks about the equipartition theorem, it concentrates on the internal energy of a particular molecule or the internal energy of a mole of molecules. These two variables are difficult to measure directly in the laboratory. In other words, if you're an experimental physical chemist and you go into the laboratory to actually measure the absolute value of the internal energy of a single molecule or a mole of molecules or any number of molecules, that turns out to be a hard thing to do. But it's a lot easier to measure the capacity of some volume of molecules to absorb heat, the heat capacity. That's an easier quantity to measure. You may recall the constant volume heat capacity is just the amount of energy a material can store per unit temperature, so the constant volume heat capacity is just the partial derivative of the total energy of the system with temperature, at constant volume. We can equally well define it in terms of the average internal energy of a particular molecule. Okay, so hypothetically, if the average internal energy of a particular molecule turned out to be kT over 2, then the heat capacity is just the derivative of kT over 2 with respect to T, so it's just k over 2. So if we know the internal energy, we can get the heat capacity and vice versa. Now, I haven't told you why this is cool yet, but bear with me. This is a plot of the heat capacity as a function of temperature for some generic molecule. What we want to appreciate is that as the temperature increases along this axis from left to right, the capacity of a molecule to store energy increases in a stepwise fashion like this. Why is that? The answer is that at really low temperatures, only the translational states that are available to a molecule are occupied. In other words, the only way a molecule can store energy is by changing its velocity. It can store more energy by speeding up. Once it gets to a temperature where the thermal energy available at that temperature equals the energy between rotational states of the molecule, now the molecule can start to rotate as well as translate. So it's got a new manifold that it can access for storing energy.
It can store energy as translation, but also it can store energy in terms of its rotational states, because now it's reached the threshold here where the thermal energy available to it is high enough that it can start to access them. Now remember, these rotational states are much further apart in molecules than the translational states, much further apart. Finally, if the temperature is even higher, you get to the point where you're exciting many rotations, but suddenly you start to have just the threshold of energy you need to excite some vibrations of the molecule. Now you access the vibrational manifold, and you've got three ways to store energy: translation, rotation, and now vibration. Okay, so let me point out a couple of things about this diagram first, so it's not the slightest bit confusing. The units for this constant-volume heat capacity are in terms of k, in other words 7 halves k, 5 halves k, 3 halves k, if we're talking about a single molecule. If we're talking about a mole of molecules, they're in units of R, because one mole, Avogadro's number, times k is R, isn't it? Okay, so even though it doesn't say what the units are here, that's 3 halves R if we're talking about per mole. These temperatures here are the characteristic temperatures, called theta sub V and theta sub R in your book. This is the characteristic rotational temperature and this is the characteristic vibrational temperature. The rotational temperature is just B written out in terms of joules, right? B, remember, is the rotational constant for the molecule. Write it out in terms of joules, or write it in wave numbers and do a conversion to joules. That's what this is, always confusing to me. Divide by k and you get units of temperature, right? Because k is joules per Kelvin. If that's in joules, I'm going to get this in terms of Kelvin. That's what that is right there. Likewise, that's the energy between vibrational states, h nu. If I write that in terms of joules and divide by k, I get that temperature right there. So that's the characteristic vibrational and the characteristic rotational temperature. We can calculate these two temperatures for any molecule as long as we know B and h nu. So down here, we've got only translation going on. Up here, we've got translation and rotation, because the thermal energy is high enough now that the molecule can rotate as well as translate. And finally, up here, we've got all three things going on. The capacity of the molecule to store heat increases as it has more channels in which to put the heat. Very intuitive idea, I think. So what the equipartition theorem does is provide a shortcut method for estimating, approximately now, the internal energy and heat capacity of any molecule. What makes it interesting is it actually works. It's so simple and it actually works. It gives the right answer, approximately. How does it work? Here's the way it works. You write the classical Hamiltonian for the molecule. The classical Hamiltonian, so if there's quantum mechanical stuff going on, we're going to miss it here. Then we count each one of these quadratic terms. Well, let me show you. Consider the classical Hamiltonian for a 1D harmonic oscillator. There's two terms in the Hamiltonian: a kinetic energy term and a potential energy term. Kinetic energy, potential energy.
The kinetic energy term is just p squared over 2m, where p is the momentum and m is the mass. The potential energy is one half k x squared, just Hooke's law, where k is the force constant of the bond and x is the displacement of the bond from equilibrium, r minus r0, if you will. So there's two terms in the classical Hamiltonian. The equipartition theorem says that for any quadratic term in this Hamiltonian having the form, for example, a p squared or b x squared, the internal energy of the molecule is kT over 2 for each such term. How could it be that simple? Any term at all, I just take the term and assign it kT over 2, and that's the internal energy? Yes. That's going to work? Well, we'll see. So the problem of applying the equipartition theorem comes down to writing this classical Hamiltonian correctly, figuring out how many modes there are that are actually participating in the energy storage, and then assigning each one this magic number, kT over 2, or if you've got a mole, RT over 2. It's pretty easy. Now, you recall that the heat capacity is just the amount of energy stored per unit temperature, as we just said. So if there's one quadratic term in the classical Hamiltonian, then this internal energy is kT over 2 and the heat capacity is k over 2 for a single molecule, or R over 2 for a mole of molecules. Yes, just said that. Okay. So let's calculate something. Now, all molecules translate, and their classical Hamiltonian in three dimensions for translation is just this: p sub x squared, the momentum in x, squared, plus the momentum in y squared, plus the momentum in z squared, divided by two times the mass. Same for every molecule. How many quadratic terms are there? Three. One, two, three. The equipartition theorem tells us that translation contributes three kT over two to the internal energy of a single molecule. Three, because there's three quadratic terms. Okay, so the internal energy of a single molecule is going to be this, the internal energy of a mole of molecules is going to be this, and the heat capacity is just going to be the derivative of that with respect to T. It's going to be 3R over 2; I just leave the T out. The contribution of molecular translation to the heat capacity is 3R over 2 for every molecule. Well, yeah, look at that. 3R over 2. That's why that's 3R over 2. Now, is that only approximately correct? No, that's exactly correct. I'll show you in a second. Well, I'll show you at the end of the lecture. Okay, so this is also the total heat capacity for all monatomic gases, obviously, because a monatomic gas can't store energy any other way. It can't rotate, it can't vibrate, and so this is the whole story for a monatomic gas like neon or argon. It can't do anything else. So this is the heat capacity of a noble gas, for example. Full stop. There are no bumps. That doesn't happen, and that doesn't happen; it's just boom. Okay, for molecules with more than one atom, vibration and rotation can also contribute to the heat capacity, but vibration doesn't turn on until the temperature approaches the characteristic vibrational temperature. Same thing's true for rotation. Rotation doesn't turn on until the temperature approaches the characteristic rotational temperature. When I say turn on, I mean it doesn't contribute to the heat capacity. So for a linear molecule, let's say that we're at a temperature that's higher than the rotational characteristic temperature but lower than the vibrational characteristic temperature. In other words, rotation is turned on, but vibration isn't. At moderate temperatures this turns out to be the case.
Let's say below about 100 wave numbers in thermal energy. So the Hamiltonian for rotation of a linear molecule now has got two terms. It can rotate in x and it can rotate in y. These are the moments of inertia, these I's; so that should be I sub x and I sub y. Okay, how many quadratic terms are there here? Two. Two. Could it be that simple? Rotation about the x-axis, rotation about the y-axis, that's the whole story. So the internal energy of rotation now is going to be two times kT over two. Or for a mole of molecules, it's two times RT over two. That's amazingly simple, isn't it? So the heat capacity, the total heat capacity for the molecule in this temperature range, has got two contributions to it. Here's the translational contribution, three R over two; that's always the same. That's always going to be three R over two. Here's the rotational contribution, boom. It's two R over two, because the molecule's got two ways it can rotate. It can rotate in x (it can tumble), or it can rotate in y. There are actually two coordinates it can rotate about, orthogonal to one another. Can it rotate along its axis like this? No. But if it's oriented like this, it can rotate like that, or it can rotate like that. That's the x and y rotation that we're talking about. So the total heat capacity is just the sum of these two things, five R over two. There's no vibrational contribution, because we're way below the characteristic vibrational temperature. We said we're in this range here; we're way below the temperature where vibration would turn on. We're up here. That's five R over two, yes, that's just what we calculated. So it looks like this plot probably applies to a linear molecule, because if it wasn't a linear molecule this wouldn't be five R over two; the rotational part would be three R over two instead of two R over two. That two would be a what? If it wasn't a linear molecule, it could rotate in all three dimensions, x, y, and z, right? So that two would be a three, okay. Now, for a nonlinear molecule, yes, this is just what I said: x, y, and z, 3kT over two, or 3RT over two, boom. That would make the total heat capacity 3R if it was a nonlinear molecule. So this plot that I stole off Wikipedia obviously applies to a linear molecule. Translation and rotation. Now what about at higher temperatures, where we start to excite not only rotation and translation but also vibration? As we said earlier, the classical Hamiltonian for vibration actually contains two terms, a little bit more complicated than for translation or rotation, because even for a single mode, there's two terms in the classical Hamiltonian, the potential energy and the kinetic energy. And we're going to sum these guys over either 3N minus 5 or 3N minus 6 vibrational modes per molecule, right? Depending on whether the molecule is linear or nonlinear. If it's linear, it's going to be 3N minus 5. So following through with the predictions of the equipartition theorem, we're going to get for each molecule 2kT over 2 per mode, or for a mole of molecules 2RT over 2 per mode, because the classical Hamiltonian contains two quadratic terms for each mode. So following through with the predictions of the equipartition theorem, we've got translation. For a nonlinear molecule, we've got rotation, right? Three, for x, y, and z. And we've got a contribution from vibration, which is going to be either 3N minus 5 for linear molecules or 3N minus 6 for nonlinear molecules.
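(A quick aside on the arithmetic: the counting rule the lecture keeps applying, one R/2 per quadratic term, fits in a few lines of Python. This sketch is an editorial addition, not the lecture's; the function name and the assumption that every counted mode is fully turned on are mine.)

```python
# Equipartition bookkeeping: each quadratic term in the classical
# Hamiltonian contributes R/2 to the molar constant-volume heat capacity.
def equipartition_cv_in_R(n_atoms, linear, include_vibrations=False):
    terms = 3                                   # translation: px^2, py^2, pz^2
    if n_atoms > 1:
        terms += 2 if linear else 3             # rotation: 2 axes if linear, else 3
    if include_vibrations and n_atoms > 1:
        n_modes = 3 * n_atoms - (5 if linear else 6)
        terms += 2 * n_modes                    # kinetic + potential per mode
    return terms / 2                            # heat capacity in units of R

print(equipartition_cv_in_R(1, linear=False))                          # 1.5 -> 3R/2, argon
print(equipartition_cv_in_R(2, linear=True))                           # 2.5 -> 5R/2, vibration off
print(equipartition_cv_in_R(2, linear=True, include_vibrations=True))  # 3.5 -> 7R/2, vibration on
```

The n_atoms guard just keeps a lone atom from picking up rotational or vibrational terms it doesn't have, which is the noble gas case from a moment ago.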
That's translation, rotation, and vibration. So the total is going to be that. So shouldn't it be nonlinear molecules this way? Shouldn't it be what? Shouldn't this one be for nonlinear molecules? So 3N minus 6 is nonlinear molecules. Yeah, sorry. Right, sorry. Yes. Got to fix that, sorry. That should be nonlinear. Okay. So for example, for a diatomic molecule, which is linear, it should be 3N minus 5. Is that right? Yeah. So let's see. If in fact we used the 6, this would be 3R over 2 for the translation of the diatomic molecule. It could rotate in x and y, so it would be 2R over 2 for its rotation. And then it would be 3 times 2 for the number of atoms, minus 6, which is zero, so plus 0: 5R over 2. With 3N minus 5 we get one vibrational mode instead, and that's what we want for the linear diatomic: 7 halves R. Let's do some examples. Use the equipartition theorem to estimate the constant volume molar heat capacity of I2, methane, and benzene at 25 degrees C. Okay. So the first thing that you want to figure out here is where you are on this plot. In other words, how many terms are there going to be in your heat capacity expression? Is only translation contributing? Translation and rotation? Or translation, rotation, and vibration? The way that you figure that out: first of all, if you're at a temperature near room temperature, what do we know about whether the rotations of the molecule are going to be excited or not? How much thermal energy is there at room temperature? What's kT at room temperature? In any units that you want to use. Bless you. How much thermal energy is there at room temperature in wave numbers? Right. 207, or 200 roughly. 200 wave numbers. What is the energy spacing for rotation of a moderately sized molecule? Round numbers. Energy spacing for rotation. What's B? Is it 1,000 wave numbers? Anybody want to guess? 400? No, that's too high. A handful. One to three. A small number of wave numbers for rotational states. How many wave numbers are there for vibration? Round number. OH stretching frequency. Remember that from organic chemistry, the big blob over on the left-hand side of your spectrum. What were those energies? Anybody remember? 3,000 wave numbers. That's an OH stretch. Order of magnitude, a thousand wave numbers. One or two for rotation, a thousand for vibration. Qualitatively, this is going to help us figure out where we are on this diagram. I2: both of the atoms are heavy, aren't they? Iodine is a big atom, 126 grams per mole. And so that's a pretty low frequency. That's a pretty low energy, rather: 200 wave numbers. But in the range where we sort of expected it to be, right? A thousand wave numbers, to one sig fig; that's how much energy there is in a vibration. So do we have to think about whether this thing is going to be vibrating at 25 degrees C? Well, at 25 degrees C we've got 200 wave numbers of thermal energy. We know one or two is enough to excite rotation, so this baby is rotating. We don't have to worry about that. So we're definitely here; we're just not sure where we are here. Are we up here? In other words, is that vibration excited? Or are we down here? Is that vibration not excited? Well, we've already concluded that at 25 degrees C we've got 200 wave numbers, so we're close. Will the vibration of I2 contribute to the heat capacity? Well, if we're not sure, we can calculate the characteristic temperature from this 214 wave numbers. We just have to convert 214 to joules, divide by k, and we get 308 Kelvin.
That's a little bit higher because that's not 200 wave numbers, it's 214; so that's why that's 308. And so we're right here. This line turns out to be 308, and we're just below that. And so this state, this vibrational mode of the I2, is significantly turned on. It's starting to contribute to the heat capacity. We can either assume we're down here or assume we're out here, and we always make the high temperature assumption. If it's starting to contribute, we're going to use this to calculate our equipartition heat capacity, confident in the knowledge that we're going to overestimate it a little bit, because this isn't a perfect science. We're not going to get the heat capacity exactly to three sig figs; we're shooting for one sig fig here. So we're going to say, yes, that mode is turned on, because we're somewhere on this rising portion of this curve. The molecule's starting to store energy in its vibrational modes as well as rotation. And so we've got the translational contribution to the heat capacity, we've got two rotational degrees of freedom because it's a linear molecule, and we've got one vibrational mode, or one R. We're going to include the whole mode. We don't split it up. So it's 5R over 2 plus R, which is 7R over 2. Actually, this is 3.5R, right? And the actual heat capacity of iodine is 3.4R. So we overestimated the heat capacity slightly, but not by that much. We did a really good job of guessing what it would be, just using the equipartition theorem. I like simple, intuitive things like this that allow you to get the right answer. We're always looking for better chemical intuition so that we can, at an order of magnitude level, figure out what's going on. That's the real challenge. Later on we can calculate this to three sig figs if we want. But we want to have an intuitive understanding of how big that number is, and we can get that with this equipartition theorem. Now, if we left out this mode, if we left out the vibrational mode, we'd be way too low: 2.5R. So that's justification for including it, even if we're not all the way turned on here. We're here, we're not all the way up here, but we're going to include it anyway, and most of the time that's going to get us closer to the right answer. Let's look at this guy. Obviously more complicated. Here are all the vibrational frequencies that apply to methane: 1367, 1582, and so on. All right, do these vibrations contribute to the heat capacity at room temperature? What do you think? Is 1367 going to be storing energy for you at 25 degrees C, where the thermal energy is how many wave numbers? 200. You've got 200 wave numbers of thermal energy, and the lowest vibrational energy level of the methane is 1367. Is that 1367 going to be storing energy for you? No. 200 wave numbers versus 1300 wave numbers; you need 1300 wave numbers of thermal energy before this thing gets turned on. Right? Not sure? Plug in. Take the lowest one of these numbers, the 1367, plug it in, convert it to joules, and then divide by k. 1966 degrees Kelvin: that's how hot it would have to be for this lowest mode to get turned on. So at 298 K, it's not on. You see why? The energy is too high for this vibration. It's not getting excited at 298. It's not up here. We're going to assume none of these guys is turned on. We are right here. In other words, rotations are turned on, but no vibrations are turned on. That's the hard part of this little calculation: figuring out what do you include?
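(That screening step, deciding whether a mode is on, is just two unit conversions. Here is a short sketch, added editorially with constants rounded to four figures, that reproduces the 308 K, the roughly 207 wave numbers of thermal energy, and the 1966 K for methane's lowest mode.)

```python
h  = 6.626e-34   # J s
c  = 2.998e10    # cm/s, so h*c converts wave numbers to joules
kB = 1.381e-23   # J/K

def theta_vib(wavenumber):
    """Characteristic vibrational temperature, in K, for a mode in cm^-1."""
    return h * c * wavenumber / kB

def thermal_energy_in_wavenumbers(T):
    """kT expressed in cm^-1."""
    return kB * T / (h * c)

print(theta_vib(214.0))                     # ~308 K: the I2 stretch, barely on at 298 K
print(thermal_energy_in_wavenumbers(298))   # ~207 cm^-1 of thermal energy at 25 C
print(theta_vib(1367.0))                    # ~1966 K: methane's lowest mode, off at 298 K
```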
Do you include the vibrations or not? You're almost always including translation and rotation. Unless you're at a really low temperature, you're including translation and rotation. The question usually concerns vibrations. Which ones are turned on, which ones aren't. Some might be turned on, others are not. So the heat capacity is going to be translation plus rotation. This is not a linear molecule, so that's a 3. And then we're going to include no vibrational modes, because they're all at energies that are too high. So we're going to predict the heat capacity is 3R, and the reality is, for methane, it's 3.2R. So the reality is these vibrational modes are contributing a little bit. We neglected them, but in reality, they're contributing a little bit. We missed that, but we get awful close. We get the right answer to one sig fig. Now, a toughie. Benzene has got all of these different states. Pay attention to this column. This is the actual frequency: 1100, 3000, 1300. Okay, should we include any of these? Is benzene going to be storing energy in any of these modes right here? They're all too high. How much thermal energy is there at room temperature? 200. Never forget that number. What about here? Here's another mode, the CH out-of-plane wag, 600 wave numbers. Huh, it's a little bit higher than 200, but not way higher, right? 3000, forget it. 1000, forget it. 900, very high. 684: another one that's a little bit lower than all the rest. 1400, 1100. Why don't we see what happens if we include two modes? We'll include this guy, 684, and we'll include this guy, 651. All the rest of them are much higher. Let's see how close we get. So let's go with 651 and 684 wave numbers. Translation, rotation: it's a nonlinear molecule, so that's a 3. If it was linear, it would be a 2. Then 2 times 2 for the vibrations. Because, and I would have to tell you this, there's no way that you could know, each one of these modes is doubly degenerate. So there isn't one 651 wave number mode, there's two. You have to include that. Each degeneracy corresponds to another way the molecule can store the energy. So there's really four modes here, so it's 4R. So 7R would be the total heat capacity that we estimate if we include two modes and forget all the rest, and you can see we don't quite get one sig fig of accuracy. The actual heat capacity is 8.8R. So that means benzene can use some of these modes at sort of a thousand wave numbers. They can contribute a little bit to the heat capacity, even though we're at a way lower temperature. We're at a thermal energy of 200 wave numbers. We shouldn't be turning these things on until we get to a thousand, but they get tickled a little bit even at this low temperature. They contribute a little bit. That's the difference between the 7R that we calculated and the 8.8R that benzene actually has. Yes, we should have, but let's go back and look. These guys: where's the 900 wave number mode? Yeah, you know, who's going to know? You can't expect it. In reality, if we include the 900 wave number mode, we're going to get closer to the right answer, but how would we know that? So this is a very approximate art. In this particular case, it wouldn't be exactly clear which modes to choose and which ones to not choose. It can even be more complicated than this, but for simpler molecules, in most cases, it's fairly obvious which modes to include and which ones not to include. Now, the way most people treat the equipartition theorem is they just include all the modes.
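(Here is the tally for all three molecules in one place, an added sketch rather than anything shown in class; the active-mode counts are the judgment calls made above, and the last line is the all-modes limit discussed next.)

```python
# Heat capacity in units of R: 3 translations, 2 or 3 rotations,
# plus two quadratic terms for each vibrational mode judged to be on.
def cv_in_R(linear, active_modes):
    rotations = 2 if linear else 3
    return (3 + rotations + 2 * active_modes) / 2

print(cv_in_R(linear=True,  active_modes=1))   # I2: 3.5 -> 7R/2 (measured ~3.4R)
print(cv_in_R(linear=False, active_modes=0))   # methane: 3R (measured ~3.2R)
print(cv_in_R(linear=False, active_modes=4))   # benzene, two doubly degenerate
                                               # modes counted twice: 7R (measured ~8.8R)
print(cv_in_R(linear=False, active_modes=30))  # benzene with all 3(12)-6 = 30
                                               # modes on: 33R, the high-T limit
```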
The equipartition theorem, treated that way, gives you the high temperature limit for the heat capacity. And you can see why: if you include all the modes, then the molecule has to be really hot before it can access all of them, in the case of benzene. Definitely a quiz question on this coming at you for Friday. Now, with that intuition, we need to be able to calculate exactly how big each one of these guys is. And I know you've been doing that already: you've been doing the homework and going to discussion, and we'll talk about it in lecture just briefly today. We're going to start by talking about translation. Here's a molecule. That's the translational energy for it moving in three dimensions. This is my little... Pay no attention. Classically, if we know the velocity in x, y, and z, we can calculate the kinetic energy. But quantum mechanically, the gas energies are given by the particle in the box model, where these are the dimensions of the box, Lx, Ly (that subscript should be a y, not an x, sorry), and Lz, and these are the quantum numbers for each of those dimensions. Remember this? Way back, probably from fall quarter. So these are what those wave functions look like, for goodness sakes, in three dimensions. Blast from the past. Now we're going to concentrate attention on ideal monatomic gases just for the moment. Such gases have no internal energy in the form of rotations or vibrations. We'll assume that just the ground electronic state of the system needs to be considered in our analysis. So one of the tacit assumptions we've been making for the last 20 minutes is that the electronic states of the molecule are not contributing anything to the heat capacity, because we're only occupying a single electronic state. Now, for some molecules that would be a bad assumption, but there's relatively few where you've got low-lying electronic states that contribute to the heat capacity. We talked about one, NO, right? But there's very few examples like that. Okay, because these various energy manifolds, rotation, vibration, translation, can be separated, the solution to the monatomic gas translational energy will also provide us with a general expression for the translational energy of any gas, no matter how many atoms it has. All we need to know is how big it is. Okay, so consider first of all a monatomic gas in one dimension. We've only got a one-dimensional term here. We've only got a quantum number for x and a dimension for x. The molecular partition function is just... we just have to plug this energy into our expression for the partition function, boom. Right, that's all I did there. Now, these energies are very closely spaced. Consider, for example, if I put this argon atom in a one micron box. One micron: how big is one micron? Well, it's 10 to the minus 6 meters. Alright, how big is a red blood cell? What's the smallest thing that you can see in an optical microscope? Anybody know? You've got an optical microscope. Let's say you buy the world's best Zeiss optical microscope. You pay $12,000 for it. It's got objectives like beer cans on it. Alright, you look through it. What's the smallest thing that you can see? What's the smallest size of thing that you can see? How many people have had microbiology class? Come on, you guys. Microbiology people should know the answer to this. How big is a bacterium? About a micron. Can you see a bacterium? Yes, just barely. One micron. In an optical microscope you can see a one micron object.
It doesn't matter how much you pay for it, because you can't see anything smaller than a fraction of a wavelength of light. What's the wavelength of green light? Half a micron. Turns out that's about the smallest dimension you're going to see. It doesn't matter how much money you pay for your microscope. If you don't pay enough, you won't even see that. One micron is a tiny box. It's about the smallest thing that you could possibly see in an optical microscope. We're not giving the molecule very far to move. Not only that, we're only considering its motion in one direction, not y and z. These energies are very closely spaced, even though the box is only one micron long. Delta E: what's the state spacing between the ground translational energy level and the first excited translational level? Let's just calculate that and find out what it is. h; m in units of kilograms now. When you're doing this on the quiz on Friday, make sure that you use kilograms. That's 10 to the minus 6 meters; L is squared, and we've got n equals 2. So 2 squared is 4, minus 1 is 3. That's the energy we get: 2.48 times 10 to the minus 30 joules. Big energy or small? Who knows? It's joules. It always looks small. If it was big it would be 10 to the minus 20, 10 to the minus 18; still seems like a small number. We convert it to wave numbers. We know that's small: 1.25 times 10 to the minus 7 wave numbers. A tiny unit of energy. One wave number is enough to get a molecule to rotate, and this is 10 to the minus 7. The state spacing is really, really tiny. Here's a log scale. Here's 2.48 times 10 to the minus 30. It's right there. As I increase the quantum number, I'm looking at the state spacing for higher and higher states. Look what happens. The states are getting closer together. These states are quasi-continuous. There's a tremendous density of states. They're so close together that they're almost a continuous distribution. Since that's the case, we can turn this summation into an integral. We're going to integrate over all the states, 0 to infinity. Just move that guy into the integral, and we're going to integrate across all of the states. We're going to use a little trick to get the integral right. When we plug everything in, we're just going to substitute alpha for everything here except for n; n is the integration variable. When we do that and we evaluate, this is the expression that we get for q. After integration we find out that the partition function in one dimension is just the length divided by h, times the square root of 2 pi m over beta. Now we can calculate exactly what the partition function is in one dimension, to as many sig figs as we want. This is the partition function in one dimension for any molecule. Any molecule. All we need to know is its mass. Calculate the partition function in one dimension for an argon atom confined to a 1 micron one-dimensional box at 300 degrees Kelvin. Boom! 10 to the minus 6. 1 over kT: we have to know T; it's 300. We have to know m in units of kilograms. Kilograms. Am I emphasizing this point enough? We want the mass of one atom, so we divide by Avogadro's number. If I look on the periodic table, it says 39.948 grams per mole, but we're going to write that down in kilograms. We're not going to forget that. Or we're going to get none of the above when the right answer is actually right there. Okay? And so when we calculate this number, it's 62,000. There's 62,000 translational states in this one micron box. Amazing!
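(Both of those numbers are worth checking yourself. This is an added sketch with rounded constants; the only care required is the mass in kilograms per atom, exactly as the lecture warns.)

```python
import math

h  = 6.626e-34        # J s
kB = 1.381e-23        # J/K
hc = 1.9865e-23       # J cm, for converting joules to wave numbers
NA = 6.022e23
m  = 39.948e-3 / NA   # one argon atom, in kilograms
L  = 1e-6             # box length: one micron, in meters
T  = 300.0            # K

# Spacing between the n=1 and n=2 levels: (2^2 - 1^2) h^2 / (8 m L^2)
dE = 3 * h**2 / (8 * m * L**2)
print(dE)        # ~2.48e-30 J
print(dE / hc)   # ~1.25e-7 cm^-1: a quasi-continuum of states

# One-dimensional translational partition function: q = (L/h) sqrt(2 pi m k T)
q1d = (L / h) * math.sqrt(2 * math.pi * m * kB * T)
print(q1d)       # ~6.3e4: the 62,000 states quoted in the lecture
```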
Alright, there are in principle 62,000 thermally accessible translational states at room temperature in this one micron box. That's a large number. Okay. What if it was a three dimensional box? Well, we just have to cube the same expression we just derived. We cube it, so that's not a square root anymore; it's a three halves power. And now we have to include Lx, Ly, Lz. Got to cube the h as well. Okay? We cube everything, because the translational partition function overall is just qx times qy times qz. They're separable. Okay? And so this is just the volume, obviously. And so if I just move things around, that's the expression that I've got. And there's an even simpler expression if I substitute something called the thermal wavelength, and you've seen all this already if you've been looking at the homework problems; we talked about the thermal wavelength already. Alright, this is just a way to parameterize this equation a little more conveniently, because if we define the thermal wavelength this way, then the translational partition function is just that. Really, really easy to remember. Okay? And so we can calculate the energy from this by using the equation that we derived, the least intuitive equation in chemistry. Okay? q is just this, so we can plug that in for q. Where did we do that? There, we did it right there. There's q. Okay, so it's just V over the thermal wavelength cubed: d d beta of V over the thermal wavelength cubed, with the thermal wavelength equal to that. Alright, and if you trust me, that's what we get for the internal energy; for a mole, it should be. This should be the average internal energy per molecule, because we've got k here, unless we use N equal to Avogadro's number. Okay, so the bottom line is this is 3 halves RT for one mole. Well, we knew that from the equipartition theorem, didn't we? It's 3 halves RT for translation. Now we've proved that that's exactly right. Okay, what is that? 100 slides. Pretty good. So we'll see you on Friday.
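(To close the loop on that last claim, the following added sketch builds the three-dimensional partition function through the thermal wavelength and takes the beta derivative of ln q numerically; it lands on 3/2, i.e. (3/2)kT per atom and (3/2)RT per mole, as promised. The finite-difference step is my own choice.)

```python
import math

h, kB, NA = 6.626e-34, 1.381e-23, 6.022e23
m = 39.948e-3 / NA      # argon again, kilograms per atom
V = (1e-6) ** 3         # a one-micron cube, in m^3
T = 300.0

def thermal_wavelength(beta):
    """Lambda = h * sqrt(beta / (2 pi m)) = h / sqrt(2 pi m k T)."""
    return h * math.sqrt(beta / (2.0 * math.pi * m))

def ln_q(beta):
    """Three-dimensional translational partition function, q = V / Lambda^3."""
    return math.log(V / thermal_wavelength(beta) ** 3)

beta = 1.0 / (kB * T)
db = 1e-6 * beta
U = -(ln_q(beta + db) - ln_q(beta - db)) / (2.0 * db)  # U = -d(ln q)/d(beta)
print(U / (kB * T))     # ~1.5: three halves kT per atom, exactly as derived
```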
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 05. Thermodynamics and Chemical Dynamics -- The Equipartition Theorem -- Instructor: Reginald Penner, Ph.D. Description: In Chemistry 131C, students will study how to calculate macroscopic chemical properties of systems. This course will build on the microscopic understanding (Chemical Physics) to reinforce and expand your understanding of the basic thermo-chemistry concepts from General Chemistry (Physical Chemistry.) We then go on to study how chemical reaction rates are measured and calculated from molecular properties. Topics covered include: Energy, entropy, and the thermodynamic potentials; Chemical equilibrium; and Chemical kinetics. Index of Topics: 0:02:34 In Real Molecules... 0:05:51 Constant Volume Heat Capacity 0:11:37 The Equipartition Theorem 0:39:40 The Translational Energy of Classical Gas Molecules
10.5446/18937 (DOI)
Okay: a certain atom has a three-fold degenerate ground state, a non-degenerate electronic excited level at 3,500 wave numbers, and a three-fold degenerate level at 4,700 wave numbers. This is the last example we did on Wednesday, but we did it fast. So let's just do it again briefly, remind ourselves how this works. We want to use the form of the Boltzmann distribution law that we derived to figure out what the populations of these different energy levels are. That population will depend on the degeneracy of the energy level. Here the degeneracy is three. Here it's one. Here it's three. We can factor this information into our answer. This is the version of the partition function that we want, because it's the version that contains explicitly the degeneracy. This is the degeneracy of energy levels. Here's an energy level. Here's an energy level. And here's an energy level. The degeneracy of this level is three, one, and three. So this summation means that we have to sum over these three energy levels. There will be three terms in our summation. If there were five energy levels, there would be how many terms? Five is the correct answer. All right? Three: that's the degeneracy. Exponential. What's that energy? It's zero. Okay? And the first excited state is at 3,500 wave numbers. The second excited state is at 4,700 wave numbers. We need to work out what this is in units of joules. All right? We have our handy conversion factors: 8065.5 wave numbers per eV, 1.602 times 10 to the minus 19 joules per eV. Boom. There's the temperature that we care about: 1900 Celsius. Kelvin, rather. And so this 2.649 is what? All right? It's the exponent here. There should be a minus sign in front of it, and that's the exponent on this e. Okay? And so when I evaluate that e to the minus 2.649, I get 0.07069. Okay? And obviously this guy is just going to be three, because that's zero. All right? So this exponential is just one. And so 3 plus that plus that is 3.156. And the first question we always want to ask when we're calculating the partition function is: is the order of magnitude of this number right, or is it way off? All right? Have we made an unbelievably stupid mistake, or could the answer be correct? All right? Does it make sense? We always want to ask that question. What is the partition function? It's the number of thermally accessible states at the target temperature that we're talking about. Okay? And so if we want to figure out whether this answer could possibly be right, we need to calculate what the thermal energy is in units of wave numbers. All right? Let's calculate what the thermal energy is. What is the thermal energy? It's k times T. k is 1.381 times 10 to the minus 23 joules per Kelvin. It's 1,900 degrees Kelvin. And so that product is 2.62 times 10 to the minus 20 joules. As always, any answer in joules is counterintuitive. It's always 10 to the minus 19, 20, 24. Joules are not a unit that it's easy, at least for me, to grasp. So I always want to convert to eV or wave numbers. All right? In this case, we're just converting to wave numbers, as we've done before. 1,321 wave numbers is what that turns out to be equal to. 1,321 wave numbers. Okay. So here's my energy level diagram. Here's my ground state, triply degenerate. Here's my first excited state. Here's my second excited state. Here is that 1,321 wave numbers. That's the amount of thermal energy that's present in the system. Okay?
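(The whole calculation fits in a few lines. This sketch is an editorial addition with constants rounded to four figures, and it reproduces both the 3.156 and the 1,321 wave numbers.)

```python
import math

hc = 1.9865e-23   # J cm: converts wave numbers to joules
kB = 1.381e-23    # J/K
T  = 1900.0

g = [3, 1, 3]               # degeneracies
E = [0.0, 3500.0, 4700.0]   # level energies in wave numbers

q = sum(gi * math.exp(-hc * Ei / (kB * T)) for gi, Ei in zip(g, E))
print(q)              # ~3.156: at least 3 (the ground states), at most 7
print(kB * T / hc)    # ~1321 cm^-1 of thermal energy at 1900 K
```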
And so now we have to ask the question: what do we expect the partition function to be in this system at this temperature? Well, these ground states are definitely going to be thermally accessible. The ground states are always going to be thermally accessible when they're at zero energy. They would be accessible at 1 degree Kelvin. All right? So we know that the partition function is going to be at least three. And then we can ask: are these excited states going to be populated as well? Well, you can see we don't have enough thermal energy to significantly populate these excited states, but we expect there to be a little bit of population of those states, because what does that Boltzmann function look like? It's a decaying exponential. Okay? So there's going to be some population. This thermal energy doesn't cut off in a hard line here the way I've drawn it. It sort of smears out. Yes? So does that mean that for this particular example the maximum Q could be is seven? Yes. Did everyone hear that? Does that mean that for this particular example the maximum Q could be is seven? Yes is the answer. Okay. So we expect a little occupation of the 3,500 and 4,700 states even though we're not close to them, because there's a tail on this thermal energy that we have to remember. I'll draw it as a pink box, but we know there isn't a hard cutoff on that thermal energy; it extends exponentially to higher energies. Okay, so qualitatively this number makes sense, because here's three states that we know are going to be thermally accessible, and then there will be some fraction of these four states here that are also thermally accessible. So 3.156: qualitatively, that's what we expect here. If we had gotten four, that should be a red flag. That sounds too high. Alright? We shouldn't really have that much occupation of these excited states. And that level of intuition is only going to come from doing a bunch of problems. Right? After you've done a bunch of problems it'll be clear to you that no, there's no way that four is the right answer, and two can't be right either. Alright? We know there has to be a partition function of at least three. Okay? So look at the answer and ask yourself: we know the answer has to be between three and seven. Alright? And three and a fraction is about what we expect. Let's do another one. These are the electronic states of carbon, it turns out. Here are the energies; here are the degeneracies of those states. Now what we want to do is calculate the fractional population of every single one of these levels at 6,000 degrees Kelvin, which is the temperature of the sun. We've got carbon atoms in the sun. What electronic states are they in? So we start with this equation that we derived on Monday and Wednesday. Okay? There's our partition function. What we want to evaluate is the fractional population. Right? The number of atoms in the state divided by the total number of atoms, for each one of those four states. Okay? And so if I divide by n, that n just moves down here, so this equation doesn't have an n in it anymore. Alright? And this guy here, that's just our partition function q. Okay? In this case, q will have four terms. Yes, one for each one of these four energy levels. The first one is going to go with this level, the second one with this level right here, and the third one here. Okay? How do we calculate them? Alright? Here are the four terms. The first number here is the degeneracy of each state.
Non-degenerate, triply degenerate, fivefold, fivefold. Alright? One, three, five, five. Then we've got the exponential. That contains a different energy for each state. That's the energy of the state. Okay, so for this guy, the energy of that first excited state is at 16.4 wave numbers. Pretty low. Unbelievably low. Alright? There it is. And here I'm just doing a unit conversion to get to joules again. Alright? So I'm evaluating that term right there. Alright? And of course there will be a minus sign in front of this 3.93 times 10 to the minus 3 when I put it in that exponent there. Put a minus sign on it. Okay? And so it ends up being 2.99. This guy ends up being 4.95. This guy ends up being 0.43. I add them all together, the 1 from the ground state plus 2.99 plus 4.95 plus 0.43, to get the partition function. I get 9.37 as the answer for the partition function. Okay? So 9.37 of the 14 total states (there's 14 total states: 1 plus 3 plus 5 plus 5) are thermally accessible at 6,000 degrees Kelvin? Sounds too small, right? 6,000 degrees is an enormous temperature, isn't it? So as always, does this make sense? The only way to know for sure is to calculate that thermal energy again. Alright? 6,000 degrees Kelvin times k gives me this number in joules, which once again is completely useless to me, because I have no intuition about joules. It's always 10 to the minus 20 something. All right, but when I convert it to wave numbers, rather, I get 4,172 wave numbers. That's a number that I do have a little bit of intuition about. All I've drawn here are the electronic states of carbon, right from that table that I just showed you. Here's the ground state. It's non-degenerate. Here's the first excited state. It's only at 16.4 wave numbers, really, really close. The second excited state's at 43 wave numbers. Then there's a discontinuity here, notice, on this energy axis, right? Because that's 5,000. That's the highest excited state: 10,103.7 wave numbers. Where's the thermal energy on this diagram? It goes all the way up to 4,172. That's 6,000 degrees Kelvin. That's the thermal energy, and it tails to higher energies, of course. Okay, so what did we calculate for the partition function? 9.37. Does that make sense? 1, 2, 3, 4, 5, 6, 7, 8, 9. Does it make sense that all of those are thermally accessible? Yeah. 0.37, right? Hard to tell. All right. What that 0.37 is telling us is that there's some occupation of this guy, even though he's way the heck out there. There's some occupation. It's a little surprising. That's a big energy gap: 5,000 wave numbers, 6,000. Okay, but we can use our intuition to tell us that this 9.37 is right about what we would expect. Right, or at least close. Yes, 9.37 does make sense. Okay, that's not what we were asked, by the way. We weren't asked to calculate the partition function. We were asked to calculate the relative populations of each one of those levels. Okay, and so we want to evaluate this. This is the number of atoms in the ground state divided by the total number of atoms. That's the relative population right there. Okay, and so we take the first term. Here's our expression for the partition function. We take the first term, put that in the numerator. All right, divide by the partition function; that is the fractional population of the ground state. Now we take the second term, put that in the numerator for the N1 state, right, the first excited state: 2.99, divide by the partition function. That's the fractional occupation of that state that was at 16.4 wave numbers, and so on. 4.95, 0.43, boom, boom, boom, boom. Add them up, and you should get 1.
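(The same few lines handle the carbon problem. This added sketch uses the energies as quoted in the lecture, so the total agrees with the 9.37 above to within rounding, and the last line is the sum-to-one check that comes next.)

```python
import math

hc, kB, T = 1.9865e-23, 1.381e-23, 6000.0
g = [1, 3, 5, 5]                  # degeneracies of the four levels
E = [0.0, 16.4, 43.0, 10103.7]    # energies in cm^-1, as quoted in the lecture

terms = [gi * math.exp(-hc * Ei / (kB * T)) for gi, Ei in zip(g, E)]
q = sum(terms)
print(q)                            # ~9.4 of the 14 total states
print([t / q for t in terms])       # ~0.11, 0.32, 0.53, 0.05
print(sum(t / q for t in terms))    # 1.0: every atom accounted for
```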
That's a check on whether you did this calculation correctly. Right, because obviously the total population, once you account for all of the atoms in all of the states, should equal 1. All right, so 0.9999 means there's a little rounding here. Okay, now, we've used the Boltzmann equation before, in Chem 1, maybe in organic chemistry, but we've never (correct me if I'm wrong) been able to account for degeneracy in this way. This is the first time that you've ever been able to accurately calculate the population of energy levels like this. This is a super important thing to be able to do. Okay, very straightforward, very powerful. Now, it will be obvious to you, I hope: here's a molecule, here's a molecule at a low temperature. There's not enough energy here to occupy very many of these excited vibrational energy levels that are drawn here, right? We're almost all in the ground state, right? And at a higher temperature, we might occupy a bunch of different states in the same molecule. And if I calculated the number of ways to have this configuration right here, why, there would be 4, and if I calculated how many ways there are to have this configuration right here, there would be 24. So, qualitatively, we understand that energy is going to be correlated with W and with the partition function somehow. So, let's see if we can figure out what the relationship is between the energy and the partition function, right? We're going to keep coming back to this partition function in stat mech. It's the central object, all right? So, this is the average energy. We're going to call it the average internal energy because it's exclusive of the translational energy. In other words, an atom might have some energy because it's zooming around; it's got some kinetic energy. This is the internal energy. This is the total energy; to get the average energy, I divide by the total number of atoms or molecules, right? Right, so that makes sense. Total energy divided by number of objects that have energy is the average energy per object, right? This is my expression. What is this? This is the number of atoms or molecules that have a particular energy, and that's the energy. And so if I multiply those two things together and I sum over all the different energy levels, I should get the total energy. And that's just it, okay? Now, we have an expression for N already, from the Boltzmann distribution law. So, if I plug this expression in for that N right there, I think you can see immediately that this big N and that big N right there, they're going to cancel. Okay, and so now when I do that substitution, here's the equation that I get for the average energy, right? All I did is make a substitution here for that N, boom. Okay, and now Q is characteristic of this whole distribution of states and molecules at a particular temperature, so I don't need to include it in this summation; it can go out front. It's a constant for every one of these states; it's not going to change. Okay, so this is what my expression looks like. All I did is move 1 over Q out front. Now, our definition for Q is this. Look at something. I won't call it interesting, but note something. If I take the derivative of Q with respect to beta, right, the derivative of Q with respect to beta; this is Q; if I take the derivative with respect to beta, what are the rules for that?
Remember, the rest of this guy is going to move out front. There it is, okay, and if I put a minus sign here, that minus sign is going to get extinguished. Sorry, but I don't know why I said this. Thank you. Okay, so does everybody agree that this is equal to this right here? All right, the reason that's interesting is because this right here is that. Right: dQ d beta, minus dQ d beta, minus the derivative of the partition function with respect to beta, seems like a very abstract thing to be evaluating in the first place, all right, but my goodness, that gives us the average energy. All right, so this is an equation in your book; I don't know, I should have labeled it, I don't remember which one. All right, but the average energy is just equal to minus 1 over Q, dQ d beta. There's that minus sign; that's what that minus sign is right there. And if I want to know what the energy is for N molecules, I put big N here. Often we want to know the energy per mole. Most of the time we want to calculate energy per mole, so that number is Avogadro's number, for goodness sakes. Okay, you ready? Let's do an example. Here's a molecule, NO. Two electronic states: one at 121.1 wave numbers, one at zero. That's the ground state, and both levels are doubly degenerate. Calculate and plot the electronic partition function of NO from zero to 1,000 K. Evaluate the term populations and the electronic contribution to the molar internal energy at 300 K. Okay. 1,000 K: let's calibrate ourselves, so we have some intuition about where we're going with this problem. Where is that compared to 121 wave numbers? Boom. 121 wave numbers is down here; 1,000 K turns out to be 695 wave numbers, okay? So what are you going to predict the partition function is going to be at that temperature? Yes? Almost four. Almost four, because I think it reaches four at infinity. Yes, did everyone hear that insightful answer? Four is the highest it can be, all right? It can achieve four only at infinite temperature. So it's going to asymptotically approach four as the temperature increases, but it's never going to get there, right? Okay, let's see what we come up with. Here's the partition function, so hold that thought. Two levels this time, right? Both doubly degenerate, boom. There's the energy. We're not going to continue to write this unit conversion on the screen for wave numbers to joules. I trust that you can do that now, and you can do it any way you want; you don't have to use my conversion factors. There's a million ways to do it. Oh, there it is again. All right, here's my plot of Q as a function of temperature, all the way up to 1,000 degrees. Check it out. Does it look like it's approaching four asymptotically? Sort of does, right? What is it here, maybe 3.7, 3.65, right? Intuition-wise, that's where we sort of expected it to be, maybe a bit higher than that, because my goodness, we've got way more thermal energy than 121 wave numbers, don't we? Right, so you might be a little surprised that it's not closer to four, but what you'll find as you do more problems is, right, your intuition has to be calibrated a little bit. All right, it's going to be more than 3.5, but less than four. It's sort of in the right range at 1,000 degrees. Now, we want to calculate these term populations. The ground state is n0 divided by n. In the numerator, we have the first term of our partition function; here's our whole partition function, right?
The ground state is the first term over the whole partition function: boom, 0.64. 64% of the molecules are in the ground state. Even though... okay, well, we're talking about 300 degrees Kelvin here. Where's 300 degrees Kelvin on this darn plot? 300 degrees Kelvin is more than 121 wave numbers' worth of thermal energy, but nowhere near 1,000. Okay, just to calibrate you; let me go back to where I was. Okay, so 0.64: 64% of the molecules are in the ground state, even though there's more thermal energy than is necessary to populate that excited state, right? Most of the molecules are still in the ground state. That's interesting. And the excited state has to have, of course, the other 36%, because there's only two states, right? And those two numbers better add up to 1. Okay, I think this is a little counterintuitive, because that number seems high when you've got almost a factor of two more thermal energy than you need to populate that excited state. You still have most of the molecules in the ground state. Okay, I think we're going to stop right there. We're not going to do the rest of this one. We'll do it on Monday. Okay.
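(Since the rest of this one was deferred to Monday, here is an added sketch of the whole NO problem. The partition-function and population numbers match the ones above; the internal-energy line at the end applies the minus-one-over-q, dq-d-beta formula from earlier in the lecture, so treat that last figure as a preview rather than the lecture's worked answer.)

```python
import math

hc = 1.9865e-23          # J cm
kB = 1.381e-23           # J/K
NA = 6.022e23
E1 = 121.1 * hc          # excited-level energy, converted to joules

def q_el(beta):
    """Two doubly degenerate electronic levels, at 0 and 121.1 cm^-1."""
    return 2.0 + 2.0 * math.exp(-E1 * beta)

for T in (50.0, 100.0, 300.0, 1000.0):
    print(T, q_el(1.0 / (kB * T)))   # climbs toward 4; ~3.68 at 1000 K

T = 300.0
beta = 1.0 / (kB * T)
q = q_el(beta)
print(2.0 / q, (q - 2.0) / q)        # term populations: ~0.64 and ~0.36

db = 1e-6 * beta                     # <E> = -(1/q) dq/dbeta, done numerically
U = -(math.log(q_el(beta + db)) - math.log(q_el(beta - db))) / (2.0 * db)
print(U * NA)                        # electronic contribution: ~520 J/mol
```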
UCI Chem 131C Thermodynamics and Chemical Dynamics (Spring 2012) Lec 03. Thermodynamics and Chemical Dynamics -- Energy and q (The Partition Function) -- Instructor: Reginald Penner, Ph.D. Description: UCI Chem 131C covers the following topics: Energy, entropy, thermodynamic potentials, chemical equilibrium, and chemical kinetics. Index of Topics: 0:00:19 Example: A Certain atom... 0:03:35 Calculating Thermal Energy 0:04:22 Energy Level Diagram 0:15:45 Partition Function
10.5446/18934 (DOI)
Good morning. Welcome to our last P-Chem lecture. So I know I'm going to miss you guys. This has been a really great class. I've been really happy about, you know, how much everybody participates and is really excited about learning P-Chem, and that's really cool. It's been a lot of fun. So thanks for being such a great class. Today we're just going to do a review of what's going to be on the final. It is completely cumulative. It covers everything that we've talked about in the course, which is really a lot of stuff. So we're just going to go back through and, you know, not go into anything in too much depth, but talk about everything that's going to be on there. And if we run out of time, which we might, the slides are posted and I'm going to have a lot of office hours next week. I still haven't posted when they're going to be, but I'm planning to have, you know, at least one every day, Monday through Thursday, possibly more. It just depends how I can work out the schedule. Just a quick poll. Who has a final Monday morning? Who has a final Monday afternoon? Tuesday morning. Tuesday afternoon. Okay, looks like Tuesday is a good day. How about Wednesday morning? Wednesday afternoon? Thursday morning? Thursday afternoon. Okay, that's unfortunate. That's a bad schedule. Okay. Yeah, well, I figure, you know, people are mostly taking the same classes, so I'm trying to avoid scheduling the office hours when a lot of people won't be able to make it. Okay, so it looks like Tuesday is a pretty good day if I'm going to do extra ones. And then I'll still do, you know, a bunch of last-minute stuff Thursday, but it's too bad that a lot of people will have a final. When is it over? Six. Ouch. Okay, we'll see what we can do. I have to check the schedule. Okay, another thing I want to mention before we get started is that a lot of people have sent me emails about the seminar extra credit sheets and exam regrades, and people are getting anxious because I haven't worked on them yet. So: I was out of town the last three days, and I did not have a lot of internet access. I could see emails on my phone sort of when the plane landed and whatever, but, you know, I've been traveling a lot, or I was in an eight-hour meeting reviewing grant proposals. So I just haven't had a lot of internet access to upload stuff. So I totally get it. I know that it's anxiety-provoking that you turned in your stuff and you don't know whether you got credit for it or not. The issue is I'm just behind. So when I started doing this extra credit thing with the seminars, I didn't know everybody was going to do it. And that's awesome. I'm glad that you're doing it. But it means that, you know, whenever everybody gives me a piece of paper, it turns into a Russian novel, and I am just behind. So one of my plans for the weekend is to get caught up on this stuff. When I get all the seminar extra credit things done, I'm going to post something. I'll post it on the Facebook page and the class website. And, you know, I'll say, okay, I think I have all of them done now. And at that point, if I still didn't get yours, then please do send me an email and I'll go look through the stack again. Same thing with the exam regrades. The deadline to ask me about it is tonight at midnight. I'm going to wait until I have all of them and then just do it all at once. That'll make sure that, you know, I'm definitely doing it consistently, and also hopefully get it done quickly.
And again, I will post when I think I have them all done, and then if I missed yours, go ahead and let me know at that time. I know it's tough not to know what's going on, and I'll try to get caught up as soon as I can. All right. Does anybody have any more questions about general stuff before we start reviewing for the exam? Okay. Let's do it. All right. So as I said, this exam is really cumulative. And before we get into the older material, I want to just talk about the canonical ensemble a little bit. So we got about this far, you know, talking about the connections between the canonical ensemble and just the standard partition function that we've looked at. But we didn't quite get to finish up, and in between, we had example day which, by the way, I heard John Mark's lecture debut was awesome. So that's great. I'm not surprised, but I'm glad it went really well. Okay. So we learned about the canonical distribution and the canonical partition function. So this is for our canonical ensemble which, remember, is a collection of little individual systems that are all at the same temperature. And the thing that we like to use this for, or its key feature, is the fact that the canonical partition function is more general than our normal partition function. And that's because it doesn't assume that all the particles are independent. And so that is really useful when we want to use it for studying condensed phases, so liquids and solids, or even gases that don't behave ideally. So it's a lot more general and it can be used for more things. And, you know, obviously we're about out of time this quarter, so we're not going to do too much with this, but I want to make sure that we cover it to set you up for next quarter. So next quarter with Dr. Gerber, you're going to do a lot of working with the canonical ensemble, statistical mechanics, and get into thermodynamic properties. So the last thing that we need to talk about is the fact that you can get bulk properties of the system from the partition function. So the average energy of one of our little member systems is just the average energy over our individual ones. And we can write that down in terms of the relative populations, which, again, we remember what that is. And what would be really nice is to have this in terms of just Q, because then it's a lot more useful. And so we can substitute using the derivative of Q with respect to beta. Remember, beta is 1 over kT. And we know that the average energy equals minus d ln Q / d beta. And then for distinguishable molecules, big Q is little q to the N. So distinguishable molecules could be, you know, molecules that are in a crystal lattice so that they occupy a particular position all the time. So it could be that they're all the same, but they're in a particular point in space, and so you can distinguish them that way. Or it could be that they're not all the same type of molecule. So you might have a solution of, say, ethanol and water. So they're moving around, but some of the molecules are distinguishable because they're different molecules. And then for indistinguishable molecules, identical ones that are free to move around, big Q is little q to the N over N factorial. And so this is stuff that I basically just want you to hold in your mind for next quarter, for when you work on thermodynamics with Dr. Gerber. It would have been ideal if we had time to get to it in the last lecture, but we didn't quite. So that is what we're going to say about statistical mechanics.
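To make the average-energy relationship concrete, here is a minimal Python sketch, assuming a hypothetical two-level system with a made-up spacing: it evaluates the average energy as minus d ln Q / d beta numerically and checks it against the direct Boltzmann average. This is a reconstruction for illustration, not anything from the course materials.

```python
import numpy as np

# Average energy from a partition function: <E> = -d ln Q / d beta.
# Assumed two-level system with energies 0 and eps (in joules).
k_B = 1.380649e-23   # J/K
eps = 2.0e-21        # assumed level spacing

def ln_Q(beta, N=1):
    q = 1.0 + np.exp(-beta * eps)   # molecular partition function, little q
    return N * np.log(q)            # distinguishable molecules: Q = q^N

T = 300.0
beta = 1.0 / (k_B * T)
h = 1e-4 * beta                     # small step for a central-difference derivative
E_avg = -(ln_Q(beta + h) - ln_Q(beta - h)) / (2 * h)

# Direct Boltzmann average over the same two levels, for comparison
E_direct = eps * np.exp(-beta * eps) / (1.0 + np.exp(-beta * eps))
print(E_avg, E_direct)   # the two agree
```

For indistinguishable molecules you would use Q = q^N / N!, but the N! term doesn't depend on beta, so it drops out of the derivative and the average energy per molecule comes out the same.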
And with that, let's move on to the review for the final. Okay, so what do you need to know? So the first thing is being able to assign molecules to a point group. This is really important because there are lots of types of problems where you have to assign stuff to the right point group in order to get the right answer. And that will be the key to the whole thing. There might be, you know, maybe one problem that's not worth very much that says just assign something to a point group, but most of what's going to be going on is having to use this information to learn something about bonding or molecular motion or, you know, whether certain kinds of orbitals or wave functions can overlap. And this is something that you definitely need to review if you had trouble with it, or even if you didn't have trouble with it and you just haven't looked at it in a little while. It's an important skill. And so remember things like, you know, we've talked about different objects and how they transform under the operations in a point group. And so, you know, I bring up this OCl2 example again that we saw before in class, where depending on whether the orbitals are in phase or out of phase, they have different behavior with respect to the operations, and so you get different matrices for those operations. This is something that you definitely need to be able to do. And hopefully you see a little bit more why it's important now that we've talked about the rotational statistics of molecules that have fermions versus bosons, where you might have them transforming differently under this rotation operation. So this is something that you should definitely be able to do. So with a basis set that's described in words, you should be able to figure out, you know, how to draw it and write down appropriate matrices for these operations. And so again, depending on whether the orbitals are out of phase or in phase, in this case, you get different answers. Another thing that's important to point out is that in this particular case, we talked about the basis set being the p orbitals. So you treat them separately. We've also seen other examples where the basis set is a molecular orbital consisting of a linear combination of those p orbitals. That's a little bit of a subtle difference, but you definitely have to pay attention to it. So if our basis set is the linear combination of the orbitals, then you treat it as just one thing, and, you know, how you count what happens with the operations is a little different. So keep that in mind. We also need to do things like looking at the molecular motion and determining which vibrational modes are IR and Raman active. So remember the general procedure for doing this. So we have, you know, some molecule, and we want to learn about its IR and Raman active vibrational modes. The first thing to do is set up your basis, which is going to be X, Y, and Z unit vectors on each atom. Mistakes that I saw people make on the first couple of exams included, you know, not putting a basis set on the central atom and just doing the outer ones. Make sure you don't make that mistake again. The issue is we're interested in the relative displacements of all of the atoms, and so we have to include all of them. So of course getting the molecule into the right point group is an important part of this.
You need to be able to set up your basis and then look at this and determine whether you can use the shortcut or not to just get the character. I'm probably not mean enough to give you one where you can't in the context of a molecular vibration problem. Maybe in some other context I might, but in this case it's probably too long. And then, you know, be able to write down your reducible representation representing the molecular motion and reduce it to get the modes. And so again, we have nine elements in the basis, because we have three unit vectors on each atom, and that's going to be three times the number of atoms in the molecule. So that's a good way to check yourself. The symmetry species in your final answer should then add up to nine. And then what we have to do is go through and take out the translations and rotations, because those are something that we don't see in vibrational spectroscopy, but they're accounting for some of those symmetry species. And so you do this by looking at the character table and finding the symmetry species that correspond to X, Y, and Z and then RX, RY, and RZ. And then the vibrational modes are whatever is left over. So here's another note about this. So in this case we only have symmetry species that belong to A and B type representations. That means they're non-degenerate. It's different when you have things like E, which is doubly degenerate, and T, which is triply degenerate. If you have say a T something symmetry species and X, Y, and Z all belong to it, that takes out the T one time from your representation of what's left. So if you have three T in your reducible representation and then you see that T is X, Y, and Z, you have two T left for the vibrations. Does that make sense? If it's not clear, ask me about it. That's something that people got a little bit confused on last time. Like if you have an E representation and you're removing X and Y, that only removes one E from whatever you have left, not two, because they are doubly degenerate. Okay. Yes. Oh, okay. There wasn't a question. All right. Okay. So what's left are the vibrations. And you figure out whether they're IR or Raman active by looking at the character table, and you see whether a mode belongs to the same symmetry species as a component of the dipole moment, that being X, Y, or Z. And if it does, then it's IR active. And if it's Raman active, that means it belongs to the same symmetry species as a component of the polarizability. So something like XY, YZ, X squared minus Y squared, Z squared, something like that. And of course, a symmetry species can be both or neither or one or the other. So I think we've done plenty of examples of these in class and they've showed up on the other exam. So just make sure you go back and review. And, you know, if you made mistakes, be sure that you understand how to do it. Okay. So that's what we have to say about group theory.
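As a compact recap of that reduction step, here is a small Python sketch applying the reduction formula n_i = (1/h) sum over R of g(R) chi(R) chi_i(R) to the 3N Cartesian basis of water in C2v. This is an illustration I'm adding, not the lecture's example; the axis convention (molecular plane taken as xz) is an assumption, and flipping it just swaps the B1 and B2 labels.

```python
import numpy as np

# Reduce the 3N Cartesian representation of H2O (C2v) into irreducible pieces.
g = np.array([1, 1, 1, 1])   # class sizes for E, C2, sigma_v(xz), sigma_v'(yz)
order = g.sum()              # order of the group, h = 4

irreps = {                   # C2v character table
    'A1': np.array([1,  1,  1,  1]),
    'A2': np.array([1,  1, -1, -1]),
    'B1': np.array([1, -1,  1, -1]),
    'B2': np.array([1, -1, -1,  1]),
}

# Characters of the reducible representation: unmoved atoms (3, 1, 3, 1)
# times the per-atom contribution (3, -1, 1, 1) for the x, y, z vectors.
gamma_3N = np.array([9, -1, 3, 1])

for name, chi in irreps.items():
    n = int(np.dot(g * gamma_3N, chi)) // order
    print(name, n)   # 3 A1 + 1 A2 + 3 B1 + 2 B2, which sums to nine

# Removing translations (A1 + B1 + B2) and rotations (A2 + B1 + B2)
# leaves 2 A1 + 1 B1 as the vibrations in this axis convention.
```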
We also need to talk about different kinds of spectroscopy. So you should know sort of the big picture of spectroscopy. What are you measuring when we talk about different kinds of spectroscopy? And we've talked about quite a few. So we have electronic spectroscopy. We've got vibrational spectroscopy, IR and Raman. We also have rotational spectroscopy, which could either be, you know, direct rotational spectroscopy or rotational Raman. And you need to know how the mechanisms for these things are different from each other. You know, how you're physically measuring a signal. And we've also talked about NMR, which is different from all of these. And you should also know the relative energies that are involved. So — bless you. So if you only have enough energy to excite rotational states, you should know that everything vibrational is in the ground state. But the opposite isn't true, right? If you're exciting vibrational states, then you get the rotational excitations along with it. Okay, again, you should be able to look at the Raman spectrum and know what's different about it than the absorption spectrum. And again, there are two kinds of Raman spectroscopy, rotational and vibrational. If it's not mentioned which kind it is, then it's vibrational. That's the one that's most commonly used. But rotational Raman spectroscopy does come up as well. Okay, so once you have these either vibrational or rotational spectra, we should be able to analyze them. And so remember, this one's an IR spectrum. If you have the IR spectrum, you should be able to make the correspondence between that spectrum and the energy level diagram. So remember that the peaks in the spectrum correspond to transitions between the levels, you know, not the levels themselves. So given one of these things, either the potential energy diagram or the spectrum, you should be able to draw the other one and say, you know, which levels correspond to what. Also, you should be able to explain sort of general features of the spectrum. You should know about the selection rules, both the gross and specific selection rules for all the kinds of spectroscopy that we've talked about. This is just another picture of what these things look like. So remember IR spectra are sometimes plotted like this with the peaks going down. They're sometimes plotted with the peaks going up. It doesn't matter. It gives you the same information. Okay, so given these kinds of spectra, we should be able to calculate different things from them. So for a simple molecule, we should be able to get a bond length from this. And so things that you need to know include that the spacing between the lines for the different rotational states is 2B, or it's 4B if you take it across the middle where there's no peak in the center, because the delta J equals 0 transition is forbidden. So if you do that, it's 4B. And so based on the rotational constant, you should be able to get the bond length using these equations that we've used before. You should also be able to estimate the force constant, which is something that we did on the last exam. And for the force constant, you just need the fundamental frequency of the whole thing, which of course is the point in the center where there would be a line if the delta J equals 0 transition were allowed. And I also want to point out that these things could be in all kinds of crazy units. They could be in wave numbers or frequency or energy or, you know, some combinations of these things. And no matter what it is, you should be able to convert back and forth and use all of these things. It's an important skill because when you read the literature, you'll actually see these things written down in different ways, and it's important to be able to convert among them fluently. It would be nice if everything were consistent and in units that make sense, but alas, it's not like that.
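On the bond-length point, here is a minimal sketch of the arithmetic, using I = h/(8 pi^2 c B) and r = sqrt(I/mu) with B in wavenumbers. The CO numbers are assumed example values, not ones from the exam.

```python
import numpy as np

# Bond length from a rotational constant (B read off as half the line spacing).
h = 6.62607015e-34      # J s
c = 2.99792458e10       # speed of light in cm/s, so B can stay in cm^-1
amu = 1.66053907e-27    # kg

B = 1.9313              # cm^-1, assumed value for 12C16O
mu = (12.000 * 15.995) / (12.000 + 15.995) * amu   # reduced mass

I = h / (8 * np.pi**2 * c * B)   # moment of inertia in kg m^2
r = np.sqrt(I / mu)              # bond length in meters
print(r * 1e10)                  # about 1.13 Angstroms
```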
Okay, so this is from the practice exam. There are questions like why isn't there a peak in the middle? You know, again, that's because the specific selection rule for rotational transitions is that delta J has to equal plus or minus 1. And getting the energy for the v equals 0 to v equals 1 transition is just reading off the point in the center of the spectrum where there's a line missing. You know, or if you had a molecule that had a peak there, because it has an unpaired electron say, then that would be where that is. Other questions include, you know, is our molecule a perfect rigid rotor and how do you tell? So again, this was on the last exam. If the spacings are really exactly equivalent, then you can say this thing behaves as a perfect rigid rotor, and if they're not, if it's stretched on one side and squished on the other, then you know that you have centrifugal distortion and it's not a perfect rigid rotor. So what that means is as the molecule rotates really, really quickly, then it starts to stretch out and it doesn't behave as an ideal case. So in this particular set of examples, I would say that CO2 actually looks like a pretty good rigid rotor. The spacings are quite even, and N2O really doesn't. Question? Yes? Does the intensity matter also? Do we have to match the intensity on one side? Well, the intensities, yeah, that's a good point as to whether they're the same on both sides, but the intensities mostly come from the Boltzmann distribution of the populations, and we're going to talk about that a little bit. Yes? So can we say that CO2 is a perfect rigid rotor? Or in this case it's still not a perfect rigid rotor? If you said that CO2 is a perfect rigid rotor, for this one, I mean it can be hard to tell, but for this one, it does look like the line spacings are really even. And maybe a better question would be to compare them. Which one is a better rigid rotor? And then, you know, then it's clearly CO2. Okay, so back to the question of the line intensities. If we say these spectra were collected at room temperature and then we lower the temperature to 10 Kelvin, how would they change? How would they look different? And in that case, we would see more intensity in the lower transitions, and also the distribution would sharpen up because we're filling fewer states. There are just fewer states available at lower energy. And so we've also seen this picture before. Here's how the populations look different at low temperature versus at high temperature. So notice that the scales on these are different. It's a little bit hard to see, but it goes from, you know, 10 to 100 gigahertz on the top and 0 to 1,000 on the bottom. So that's just showing you that at higher temperature, there are many, many, many more states that can be populated than at low temperature. And also, at low temperature, we see everything piling up in just a few states. So that's something that you should be able to explain and, you know, be able to sketch what it looks like sort of qualitatively. And so that's most of the effect that accounts for the distribution of the levels.
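Here is a short sketch of exactly that effect, assuming a rigid rotor: the relative populations go as (2J+1) exp(-hcB J(J+1)/kT), so cooling the sample piles the intensity into the lowest few J and sharpens the distribution. The B value is an assumed example.

```python
import numpy as np

# Relative rotational populations n_J ~ (2J+1) exp(-hcB J(J+1)/kT).
h, c, kB = 6.626e-34, 2.998e10, 1.381e-23   # SI units, with c in cm/s
B = 1.9313                                  # cm^-1, assumed rotational constant

def populations(T, J_max=40):
    J = np.arange(J_max + 1)
    w = (2 * J + 1) * np.exp(-h * c * B * J * (J + 1) / (kB * T))
    return w / w.sum()

for T in (10.0, 300.0):
    p = populations(T)
    print(T, 'most populated J =', int(np.argmax(p)), p[:4].round(3))
# At 10 K almost everything is in the lowest few J; at 300 K the maximum
# moves out to higher J and the distribution spreads over many more states.
```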
Okay. And we've already talked about Raman spectroscopy. Okay. So we went through that pretty fast, but there's a lot of information in there about vibrational and rotational spectroscopy. So that's something to spend some time reviewing. Look at your exams from this quarter and make sure that you know how to do the problems that were there. You know, also look at the practice exams from a couple of years ago that I posted and make sure that you know how to do those problems. And then we get to electronic spectroscopy. So make sure that you know how to write term symbols. So here are the rules for term symbols for atoms again. I'm probably not going to have a question directly about term symbols for atoms, because I know that you covered it last quarter. What we're more going to be concerned about is the term symbols for diatomic molecules. But of course you have to understand how to do the ones for atoms in order to be able to do that. It's also important to remember Hund's rules in determining which of these states are lower energy. So a lot of times the electron configuration itself will be ambiguous. You can get different arrangements of electrons for the same electron configuration, which is of course why we need term symbols in the first place. They're a lot more specific than just the electron configuration. And so you should be able to use this to figure out which one is the ground state. And then the part that you're actually going to have to do is figuring this out for diatomic molecules. And so this will be pretty similar to what we did on the last exam. You get some diatomic molecule. You have to draw the molecular orbital diagram and figure out the properties of these electrons and see, you know, how many arrangements you can get out of a particular electron configuration, and then write down the term symbol. So again, make sure that you know how to do the example from the last exam. I think the TAs did a really excellent job of this in the review session last time. So look over your notes from that. And also remember things like the even-odd rule and, you know, how to determine whether particular transitions are allowed or not by symmetry. And so for electronic spectroscopy, there are going to be two considerations for transitions. So one is just whether the transitions are allowed or not by symmetry, and that's something that you get by looking at the symmetry of the wave function. You have to take into account, you know, both g and u and, if it's a sigma term, plus and minus. And then the other thing that we have to remember is Franck-Condon factors. So you should know how to write down an expression for the Franck-Condon factor between pairs of states. Again, you're probably not going to need to evaluate that, because you don't have time to do hard integrals during the time of the exam, and honestly you're not going to have a lot of extra time to do much of anything, because it's going to be long. Just like the midterms have been. It won't be twice as long, and you have twice as much time. So that's a little bit better.
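Circling back to the Franck-Condon factors for a moment, here is a sketch of what that overlap expression means numerically, for two harmonic oscillator potentials whose minima are displaced by d. The unit choices (m = omega = hbar = 1) and the displacement are assumptions for illustration, not exam values.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Franck-Condon factor |<v'|v''=0>|^2 between displaced harmonic oscillators.

def ho_wavefunction(v, x):
    # Harmonic oscillator eigenfunction with m = omega = hbar = 1
    coeffs = np.zeros(v + 1)
    coeffs[v] = 1.0
    norm = 1.0 / np.sqrt(2.0**v * math.factorial(v) * np.sqrt(np.pi))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2)

x = np.linspace(-10, 10, 4001)
d = 1.0                                      # assumed displacement of the upper potential
psi_lower = ho_wavefunction(0, x)            # v'' = 0 in the lower electronic state
for v in range(4):
    psi_upper = ho_wavefunction(v, x - d)    # v' in the displaced upper state
    fc = np.trapz(psi_upper * psi_lower, x) ** 2
    print(v, round(fc, 4))
# The factors sum to 1 over all v' and are largest where the overlap is best.
```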
But, you know, I want to take this point to say make sure that you read the directions really carefully, because there will be things where I'm trying to save you time by either giving you an intermediate step or saying only do this part of it. Just make sure you read them really carefully, and if you're confused about it, ask. You know, I do stay here for the exam and run around and answer questions, and if you ask something that is information you should already know, I'll just say sorry, you need to already know that. There's no harm in asking though. Question? Is it possible to provide us with like a practice final so we can just know the format? Well, you kind of do know the format. So I've already given you two practice midterms, and then we have the stat mech questions — you know, the quiz kind of serves as that, and I gave you a bunch of practice homework problems. So I think by now you do pretty much know what the format is going to be like and sort of how I ask questions. So I'm not going to give you a separate practice final, but I do think you have a lot of practice problems that you can work with. So, yeah, read the directions very carefully. Ask if you don't understand. And the other thing is when you get the exam, you know, take a deep breath and read the whole thing and make sure that you do the easiest problems first, because what's easy and what's hard is a matter of opinion. Some people understand some concepts more readily than others, and I want everybody to do their very best. So make sure that you do the ones that you think you can do really quickly first, and then go back and work on the things where maybe you need more time. Because I don't want people to get into a situation where you spend all your time on something that's really hard and then you find that, you know, oh, there was an easy one that you could have done quickly. So just some general exam strategy. Okay, so we definitely need to know about selection rules, and this slide on selection rules is really general. This could be for just about anything. The selection rules depend on a transition dipole, and there are different ways to look at this. Sometimes you can just do it by inspection, basically, like if you have say the harmonic oscillator wave functions, you can just look at whether they're even and odd, and remember that the transition dipole, you know, the dipole moment operator, is always odd. It goes as X, Y, or Z. And then if you can't just do it by looking at it and saying, okay, they're even and odd, then you need to do it by looking at the character table. And so in that case, you need to find the symmetry species of each function and then multiply the characters for all three of them together, and then you'll get some reducible representation in general. Sometimes you're lucky and you get something that is already an irreducible representation that's on the character table, and you can just look at it. But sometimes you'll get something that's a reducible representation. And then what you need to do with that reducible representation is see if there's a component of A1 in it — or, you know, if it's not called A1 in that point group, whatever the symmetry species is that has 1s under every operation. So again, that's the one that's invariant to all transformations. And so what we want to see there is that if you have things that actually overlap and are coupled by that dipole moment operator, then they're going to overlap no matter how you move the thing around in space. So we just have to make sure that there's a component of the symmetry species that's invariant to all transformations in order to say whether that exists or not. Question? So if it has A1 then the integral does not vanish? That's right. That means that it has a component that's invariant to all transformations, and the integral does not vanish, and you get an answer. Another mistake that I've seen people make with this on the previous exams is thinking that that means the overlap is 1. So you can tell whether it's 0 really easily from this treatment. But you don't know what the value is. So all you can say is it's not 0.
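Here is a minimal sketch of that character-table check, using C2v as an assumed example: multiply the characters of the two state symmetries and a dipole component, and see whether the product contains the totally symmetric species. As stressed above, this only says allowed versus forbidden, never how big the integral is.

```python
import numpy as np

# Symmetry check for a transition dipole in C2v: does
# Gamma(final) x Gamma(dipole) x Gamma(initial) contain A1?
g = np.array([1, 1, 1, 1])   # E, C2, sigma_v(xz), sigma_v'(yz)
irreps = {
    'A1': np.array([1,  1,  1,  1]),
    'A2': np.array([1,  1, -1, -1]),
    'B1': np.array([1, -1,  1, -1]),
    'B2': np.array([1, -1, -1,  1]),
}

def contains_A1(chi):
    return int(np.dot(g * chi, irreps['A1'])) // g.sum() > 0

initial, final = irreps['A1'], irreps['B1']    # assumed example states
for label in ('B1', 'B2', 'A1'):               # x, y, z transform as B1, B2, A1
    chi = final * irreps[label] * initial      # characters multiply element-wise
    print(label, 'allowed' if contains_A1(chi) else 'forbidden')
# Only the x (B1) component couples A1 to B1 in this example.
```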
So, you know, if people put that it's equal to 1 for these things, you got some partial credit, but be careful. You know, you don't actually know what the value is just from the symmetry treatment. It could be really tiny. It could be 1. You don't know. You have to do some more sophisticated computation to be able to get that. Okay. So continuing with electronic spectroscopy, we looked at Birge-Sponer plots and how these compare to the potentials associated with electronic spectroscopy. We've seen a couple of different examples of this. There was an example on the practice exam that I gave you. The example on the actual test was maybe a little bit harder because it was a weird example where the thing had a break in the slope. So, you know, again, just make sure you read the directions and look at what the question is actually asking, and know your equations for how this plot pertains to the potential energy diagram of the molecule. Okay. So that's what we have done with electronic spectroscopy. You need to know how to write your term symbols and be able to figure out which transitions are allowed. You need to figure out the Franck-Condon factors and also be able to use some of these plots to find out some properties of the molecules. We've also talked about NMR, and that's a little bit different from these other types of spectroscopy, but what you need to know about it is kind of similar. So there are both sort of theoretical things that we need to understand about it and then also, in a practical sense, being able to look at the spectrum and learn something about the molecule, or look at the molecule and figure out what its spectrum is going to look like. So important things to know here include how the Zeeman effect works. We have our nuclear spins. They're in all kinds of different states. If there's no magnetic field, they're all equivalent in energy. If we put the magnetic field on, that degeneracy is broken and we've got a couple of states. For a spin one-half, we can have plus or minus a half, and that corresponds to spin up or spin down. And in the spin one-half case, these states have nicknames. You can call them alpha and beta. If it's not a spin one-half, then you have to use their full names. You have to write a ket, so for instance, for spin one, you'd have |1, 1>, |1, 0>, and |1, -1>. And, you know, you can do this for any kind of nucleus. You just have to remember that whatever the value of I is, you go from plus I to minus I in increments of 1. So here that is written down. Our z component of the angular momentum goes from plus I to minus I in increments of 1. And there are a few spin operators that we learned how to use in the context of NMR. And we talked about how you can use these things to generate pulse sequences and flip the spins. I'm not going to expect you to know too much of that, but as far as how you use it at this point, I just wanted to introduce you to it. But you do need to know how to use a couple of these spin operators that we've talked about. So one is IZ. So these states that we're looking at in the Zeeman basis are the eigenstates of IZ. And so you should know that when you operate IZ on a state, you get its m value back. That's the eigenvalue. And then the eigenstate is that same ket. So for a spin 1 half case, if you operate IZ on alpha, you get 1 half alpha. And if you operate IZ on beta, you get minus 1 half beta.
But again, the equation at the top here is the general definition of IZ, and you should be able to apply that to any I value for your spin. And you should also be able to write a matrix representation of something like IZ in the Zeeman basis. And so you do that by generating each of the matrix elements. And so the important things to know here are, first of all, how to operate IZ on the states. And you do these things from right to left. So you operate IZ on the ket first. And then you take the overlap integral of whatever is left. And these things make up an orthonormal basis. So if the states are the same, the value is 1. If they're different, it's 0. And so you should be able to use that to generate a matrix representation for something like IZ. We also learned how to use the raising and lowering operators. So here this is just written down for the spin 1 half case. So if you operate I plus on alpha, you can't raise alpha anymore, so you get 0. If you operate I plus on beta, you get alpha. I would recommend checking out the general definition of the raising and lowering operators. So we mostly talked about this in the spin 1 half case. But on your exam, there's a general definition of them. And you should check that out and make sure that you know how to use it. And so again, we can write down the matrix representations of these things, because we know how to operate the operators on the states, and then we can take the overlap integrals of the states with each other. And so you should know how to write down these matrix representations. And when you do this on the exam, you should definitely show your work. If you don't want to write out all the matrix elements, at least write out a couple of them so that you show you know how to do it. So if you just write the answer, I'm going to assume that you put it on your cheat sheet and wrote it down, and you won't get full credit. You'll get some credit, but you have to show some work or explain your rationale to get all of it. Okay, IX and IY can also be written in terms of the raising and lowering operators. The actual derivation of how you get this we did in the homework, and it's important. It's probably too long to do on the exam, but you should remember what these are.
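To make the bookkeeping concrete, here is a sketch that builds IZ, I+, I-, IX, and IY in the Zeeman basis for any spin I from the general definitions; it's a reconstruction for illustration rather than code from the course.

```python
import numpy as np

# Spin operator matrices in the Zeeman basis |I, m>, with m = I, I-1, ..., -I.

def spin_matrices(I):
    m = np.arange(I, -I - 1, -1)                 # m from +I down to -I
    Iz = np.diag(m)
    # General definition: <I, m+1| I+ |I, m> = sqrt(I(I+1) - m(m+1))
    elems = np.sqrt(I * (I + 1) - m[1:] * (m[1:] + 1))
    Ip = np.diag(elems, k=1)                     # raising operator
    Im = Ip.T.conj()                             # lowering operator
    Ix = (Ip + Im) / 2                           # IX and IY in terms of I+ and I-
    Iy = (Ip - Im) / 2j
    return Iz, Ip, Im, Ix, Iy

Iz, Ip, Im, Ix, Iy = spin_matrices(0.5)
print(Iz)   # diag(+1/2, -1/2): IZ alpha = +1/2 alpha, IZ beta = -1/2 beta
print(Ip)   # I+ beta = alpha, and I+ alpha = 0
print(Ix)   # equals (I+ + I-)/2
```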
Okay, so now we get to what the spectra actually look like. So that's something where you have a lot of experience from organic chemistry to build on. The rules are the same as far as what the spectra look like, and you already knew a lot of that before starting this class. The difference is now you know how it works, or why the spectra look like that. So this is something that you should be able to do. If you have a molecule, you should be able to generate its NMR spectrum. And that should be true for any kind of nucleus that we want to talk about. The same principles apply. So whether it's proton, C-13, you know, 31P, anything like this, you should be able to generate what the spectra kind of look like. Same as last time, I'll give you a chemical shift table. So you just have to figure out what functional group is what and put things in the right general places. I'm not really worried about people memorizing the chemical shifts of different things. If you either become a synthetic chemist or an NMR spectroscopist and you really need to work with this all the time, you'll definitely remember it then. But for now you just need to be able to use the table. You should also be able to generate the coupling patterns for J couplings, and you should be able to explain where these come from in a physical sense. And, you know, you should be able to draw spectra for different kinds of molecules — basically putting everything in the right general location and figuring out the J coupling pattern. And again, you should know what those things look like with and without decoupling various nuclei. And then the last thing we talked about is statistical mechanics. And so we don't have time to review all of that, because, you know, we're almost out of time and also we just went over it. But important things that you should know include how to set up your partition function for an ensemble of molecules. You should be able to think about the most probable configurations of various states. So, for instance, you should know that you don't pile everything into the ground state necessarily, because it often doesn't have any degeneracy, whereas higher states do have multiple ways to get the same configuration. You should be comfortable with how these configurations are written down and with Boltzmann distributions and how we get the relative populations of the states. These are all good things to write down on your cheat sheet. You should know how to find the relative populations of two states or the population of a particular state relative to the whole ensemble. And you should also be able to write partition functions for various things. So here's a general case of a partition function and also how you write down the relative populations in terms of that. And you should be able to do this. We've talked about some specific examples in class. So we've talked about rotational states a lot, so you probably want to know how to do that. We've also talked about vibrational states a bit, so that's a good thing to know. We've also talked about the NMR case of this, so that's another one where it would be good to know the specifics. There are also these questions where you're given a description of the system in words and you have to write an energy level diagram and write the partition function. And here's a case where people seem to get confused a lot, where people look at this and say, well, I don't know the value of J and the degeneracy is 2J plus 1, so how do I do that? Remember, that's for the case of a rotational spectrum. And so you should know that the degeneracy of a rotational level is 2J plus 1. That's an important thing to know. But don't try to apply it to other cases. So in the case of the vibrational partition function, if we just have a harmonic oscillator, which we do for anything that we talked about in this class, those states are all non-degenerate, right? You just have your parabolic potential and then you have all these states. So those are non-degenerate. For electronic states it's more complicated. You don't know what it is. But for this kind of a general system, we already told you the degeneracy in the problem. And so don't get hung up there on, you know, wanting to use these other rules that you know for different systems. If it just says the degeneracy of state 1 is x and for state 2 it's y, then just directly write that down and use it, and it doesn't matter where it comes from in that case.
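Here is a small sketch of that kind of described-in-words problem — two levels with stated degeneracies, all numbers assumed — showing the partition function q as a sum of g_i e^(-beta epsilon_i) terms, the relative populations, and how they move with temperature.

```python
import numpy as np

# Partition function and populations for a system described in words, e.g.
# "state 0 at energy 0 with degeneracy 1; state 1 at energy eps with degeneracy 3".
kB = 1.380649e-23
energies = np.array([0.0, 2.0e-21])   # J (assumed)
degeneracies = np.array([1, 3])       # taken straight from the problem statement

def partition(T):
    beta = 1.0 / (kB * T)
    terms = degeneracies * np.exp(-beta * energies)
    q = terms.sum()                    # the partition function
    return q, terms / q                # q and the relative populations

for T in (10.0, 300.0, 3000.0):
    q, p = partition(T)
    print(T, round(q, 3), p.round(3))
# As T -> 0, q -> 1 and everything sits in the ground state; as T gets large,
# q -> 4 (the total number of states) and the populations follow the degeneracies.
```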
But you should know how to write these partition functions in a general sense for a system like this if it's described in words. And you should know how that changes with respect to temperature. So if you make the temperature really low or really high, you should know how that affects the relative populations and how it affects the partition function. And I think we are about done. That's what's going to be on the final. So thanks again for a really great class. I really enjoyed it. Thank you. Thank you.
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 26. Molecular Structure & Statistical Mechanics -- Final Exam Review. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:04:15 The Canonical Ensemble 0:08:40 Point Groups: Flow Chart 0:11:27 Group Theory - Molecular Motion 0:16:05 Big Picture: Spectroscopy 0:24:58 Term Symbols 0:30:48 Selection Rules 0:33:41 Electronic Spectroscopy 0:34:54 Nuclear Zeeman Effect 0:38:48 Raising and Lowering Operators 0:40:17 Eigenstates and Eigenvalues 0:41:54 J-Coupling: Product Basis 0:42:59 Statistical Mechanics 0:44:13 Molecular Partition Function
10.5446/18933 (DOI)
that we've been talking about with stat mech is that if you have a group of systems that can occupy different states, the one that you're going to see the most of, the one that's going to dominate, is the one that can be achieved in the most number of ways, which intuitively sounds reasonable. And then furthermore we said that if you have a really large system, like say Avogadro's number of molecules, then you're really going to see that dominate even more. So that's a little abstract. So I cooked up this little calculation here. And what the calculation does is we're dealing with a simple two state system. There's two states, heads and tails. All right, so we're talking about flipping a coin. But instead of just flipping one coin a bunch of times, let's say we have a box of quarters. We shake the box of quarters and then we take a peek inside, and then we count the number that are heads or tails. And we report the number as a percentage of heads. All right, so you expect 50% heads to dominate, right? So here's our simulation. So right now I have it set up so that there are four quarters in the box and we shake it four times. So if we look at the results here, it looks like once we got no heads at all, so we have a 0% heads; we have a 25% heads, so that happened once, one head; once we got 50-50; and once we got all tails. So we're not seeing what we expect to see, right? Because if you think about the number of ways that you can get these things, there's only one way to get all heads. There's only one way to get all tails, right? There's four ways to get one quarter on heads and four ways to get one quarter on tails. But then for 50-50, two heads, two tails, there's six ways to do that. So you should see the most of that. But we're not seeing that. So the problem right now — and by the way, this little program here, it randomly selects between 0 and 1 each time; this isn't just a graphic, we're actually doing the experiment — is that with our four coin flips, we're not seeing the distribution we expect. So we say this is an undersampled system. But we only shook it up four times, right? So let's shake it up a bunch more times. So let's say we got some poor grad student like myself to shake this box 3,000 times and count the quarters, right? So if we shake this thing 3,000 times, now we see this distribution. So looking at that, this makes a lot more sense, right? What's the shape? It's Gaussian, right? We're seeing a Gaussian distribution. So now we've sampled it enough. Turns out 3,000 times is enough times to shake this box to the point where we see what we expect to see. And if you look at these peaks relative to each other, we see the most of the 50-50 split, but there's still a good number at one head more or one head less on either side of that. Because remember, there's four ways to get those and six ways to get a 50-50 mix. So it makes sense that they're pretty close, because it doesn't have that much of a lead — 6 to 4, that's not going to dominate. And also, we still have a pretty fair showing for all tails and all heads, right? So we ran this experiment 3,000 times. The numbers are kind of small, so I'll read them to you. It looks like about 200 times we ran the experiment and got all tails, and another 200 times we got all heads. And if you think about that, if you shook a box of four quarters and you got all heads, you wouldn't be that freaked out by it, right?
You wouldn't be, oh, it's weird. That's not that weird, right? Because there's only four quarters in there. So let's play with this a little bit. So we'll leave the number of shakes at 3,000, so that we know we're sampling our system well for all of these. But let's up the number of quarters. So let's say there's eight quarters in the box. Let's now put eight in there. Now you still see a Gaussian distribution, right? And it looks like we actually still got a few that were all heads and all tails. That's a little more weird — eight quarters all on heads or tails — but we did shake this thing 3,000 times. But if you notice, it does start to tail off a little after maybe around 10% heads and 90% heads, right? So you're starting to see our Gaussian getting more and more peaked. So that's eight. Let's try 22. So now you're seeing, you know, it's getting even more peaked. Now it's tailing off by around 30 and 70%. Let's try 222, just so I can keep hitting this two button. It's convenient. Okay, now look, 50% heads and tails is really starting to dominate now. So we've got 222 quarters in a box. We're shaking it up 3,000 times. So now, you know, we've got some real statistical significance here. And now it's tailing off by like about 45 and 55% heads. So now let's try 2,000. Let's see what that does. I'll do it with all twos. So on the order of 2,000. Okay, so now our histogram's starting to get really simple. Now we're not even sampling below 48 or 52. Like we're never getting that. And remember, this thing is totally random. There's no funny math going on. It's just selecting between zero and one a bunch of times. And as you can see, we're not even seeing those less likely configurations. So now the number of ways you can get 50% is enormous, whereas there's one way to get all heads. So we're not going to see that. And if you think about that, if you shook a box of 2,000 quarters and got all heads, you'd totally freak out. Right? I would. So just to max it out, I set this thing to go up to 3,000. So it goes a little more. So if we're talking about an ensemble of quarters, you can see how 3,000 sounds like a lot. Like that's a lot of quarters. But if we're talking about molecules, that's like nothing, right? Because Avogadro's number — we'd have to add 20 zeros to that 3,000 to get on the order of a mole. So if it dominates this much for a two state system with just 3,000 quarters, you can imagine, if you had a whole mole of molecules, how the configuration with the most number of ways would really start to dominate. So the reason I did this demonstration is, I tend to be an incredulous person. If you tell me this is going to dominate, I'm like, well, really? It still could happen. But sure enough, if you actually run the calculation, you really don't see those unlikely configurations happen. So hopefully you believe in that sort of thing now.
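For anyone who wants to rerun the demonstration, here is a minimal sketch of the coin-box simulation. Like the in-class program, it just draws randomly between the two states; the code itself is a reconstruction, not the original.

```python
import collections
import numpy as np

# Shake a box of n_quarters, n_shakes times, and tally the percent heads.
rng = np.random.default_rng()

def shake(n_quarters, n_shakes):
    heads = rng.integers(0, 2, size=(n_shakes, n_quarters)).sum(axis=1)
    percent = np.round(100.0 * heads / n_quarters)
    return collections.Counter(percent)

print(shake(4, 4))        # undersampled: almost any histogram can come out
print(shake(4, 3000))     # roughly Gaussian; 50% leads, but all-heads still shows up
print(shake(2000, 3000))  # essentially never strays outside ~48-52% heads
```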
All right. So let's get to the regularly scheduled programming. So this leads me into Lagrange multipliers. So the strategy with Lagrange multipliers is we want to maximize the number of ways, right? Because we just proved with our coin flipping experiment that the configuration which has the highest number of ways to get it is the one that's going to dominate. So that's the function we want to maximize, right? So the formula for figuring out the number of ways of doing something is right there. It's the one with all the factorials in it. But the thing is, factorials are really difficult to deal with mathematically. So there's a trick. We take the log of it. And the reason we can take the log of it: a logarithmic function is something we call monotonically increasing. And you guys have seen the plot of a log function, right? It does that. It doesn't suddenly drop or start going crazy at the end, right? It just keeps steadily increasing — it increases more slowly, but nonetheless, it's always increasing. Therefore, if you take the log of some function W, wherever that log function has a maximum, it's going to be in the same place as the maximum for our simple equation with the factorials there. So that's why we're allowed to take the log of it, because the max is going to be in the same place for both functions. And the reason we want to take a log — there's one more step, one more little trick. We've talked about Stirling's approximation. And what Stirling's approximation allows us to do is get rid of those factorials, because like I said, they're really difficult to deal with mathematically. So that's why we're actually maximizing the log of W and not just W: it allows us this trick to get rid of the factorials. And the maximum we find is going to be the same maximum that we would have found had we done it the hard way. So it's a nice little trick.
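Here is a quick numeric check of Stirling's approximation, ln N! ≈ N ln N − N, just to show the relative error shrinking as N grows; it's an illustration I'm adding, not from the lecture.

```python
import math

# Stirling's approximation: ln N! ~ N ln N - N.
for N in (10, 100, 1000, 10000):
    exact = math.lgamma(N + 1)              # ln N! without overflowing
    stirling = N * math.log(N) - N
    err = 100 * (exact - stirling) / exact  # percent relative error
    print(N, round(exact, 1), round(stirling, 1), f'{err:.2f}%')
# The error falls from ~14% at N = 10 to well under 0.01% by N = 10000,
# which is why the trick is safe for anything like a mole of molecules.
```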
And this is how the partition function gets derived. So we're not looking for a global maximum. We have constraints, right? With the coin flip, there was no energetic constraint type thing. But for our molecules, we're imposing two constraints. One is that the total number of molecules never changes. And the other is that the total energy of the system never changes. So because of that, if we add up the number in each state, we should get the same number every time. I.e., if you have, you know, three more in one state than you had before, those three had to come out of another one, right? We're not just producing molecules out of thin air. In the same way, we don't lose any molecules. And the same thing happens for the energy. If you add up the energy of each individual molecule, the total needs to remain the same all the time. So those are our two constraints. So since we're maximizing a function with two constraints, the best way to do it is with the method of Lagrange multipliers. So here's how we do Lagrange multipliers. So the basic trick with this: if you wanted to maximize a simple one-dimensional function, you take the derivative of it and figure out where that function has a zero derivative, right? Because that's where it's either a maximum or a minimum. It's an extremum. So if you have a function of multiple variables, then you do the same approach, but you take the partial derivatives with respect to all the variables, and each of those should be zero. And that's where your maximum is. So that would be for a global maximum. But we're interested in a maximum with respect to some constraints. So the way you do that is with these three equations at the bottom. They're just written for arbitrary variables. The way I prefer to think about it, I like to move the term with the lambda in front of it to the other side and think of it as the derivative of the function is equal to lambda times the derivative of the constraint. And that just resonates more with me. But it's the exact same thing. And the rationale behind this — I won't get into it too much, because we just want to use it. We don't need to prove that Lagrange multipliers works. We'll leave that to the mathematicians. But basically, if you had a normal vector coming off of both surfaces, the point that satisfies these conditions is where they're parallel to each other. So if the vectors are parallel to each other but not necessarily the same length, the lambda is the factor you multiply the one by to make them the same length. And then that's where these derivatives are equivalent. So they're equivalent in kind of their angle, and to make them the same length, that's the multiplier. And that's what links them all together. So the lambda kind of ties everything together. And we can actually use it algebraically to help solve these things. So it'll make a lot more sense if I do an example. So first example: let's say we want to make a play area for a puppy, but we only have 40 feet of fence material. So we want to give the puppy the most amount of space to play, right, because we're nice people. So we want the most area with a set perimeter. So we can only do so much with the perimeter. And another thing is that we've decided we're making this thing rectangular by the way that we've defined the area, right? So the area is just length times width, or x times y. And the perimeter is locked in at 40. But to get the perimeter, you just add the two x sides and the two y sides, and that gives you the perimeter. So we have a function we want to maximize with respect to a constraint. So we apply our method of Lagrange multipliers. So the first thing we want to do is take all of these partial derivatives. So you take the partial derivatives of the function and the partial derivatives of the constraint. And they're nice, easy functions to work with, so we can do all these in our head. And now remember what I said, that the derivative of the function with respect to some variable is equal to lambda times the derivative of the constraint with respect to the same variable. So you're going to have y equals 2 lambda and x equals 2 lambda, right? So good, got that right. So we have our x max and y max equal to 2 lambda. So then we can plug this into the constraint. If you plug that into the constraint, the only variable left is lambda, so we can solve for lambda. So lambda, we figure out, is 5. And now we can plug that back into the 2 lambda, and we see that x equals y equals 10 feet. So the biggest area we can get for the dog with 40 feet of fence material is a 10 by 10 square. Which you might have guessed without doing Lagrange multipliers. But don't overlook this example when you're studying, because it's nice to look at something really simple and just focus on the technique. So yeah, don't dismiss this sort of thing. These could be really useful study tools. So yeah, 10 by 10 square.
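Here is the pen example done symbolically, writing the Lagrange conditions exactly as above; using sympy is my tooling choice for the sketch, not something from the lecture.

```python
import sympy as sp

# Maximize area x*y subject to the perimeter constraint 2x + 2y = 40.
x, y, lam = sp.symbols('x y lam', positive=True)
area = x * y
constraint = 2 * x + 2 * y - 40

eqs = [
    sp.Eq(sp.diff(area, x), lam * sp.diff(constraint, x)),  # y = 2*lam
    sp.Eq(sp.diff(area, y), lam * sp.diff(constraint, y)),  # x = 2*lam
    sp.Eq(constraint, 0),                                   # the perimeter
]
print(sp.solve(eqs, [x, y, lam]))   # x = y = 10, lam = 5: the 10 by 10 square
```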
All right, now we're going to do one a little more complicated. So now we have something called a paraboloid, and that's that yellow surface. I tried to make it UCI colors. It looks a little greenish though. So the paraboloid is that dome looking thing. And then we have a constraint, the blue plane. So the idea is we want to find the highest point on this function, where f of x, y is the yellow function and the constraint, y minus x, is the blue plane. So if we wanted just the global maximum of that paraboloid, we'd just walk to the top of the mountain and we're there, right? But the problem at hand is that we have to walk in such a way that we're touching both the plane and the paraboloid, and then find the highest point, right? So it looks like it's somewhere off to the left, right? So just kind of looking at it, we know it's going to be somewhere around there. But let's do the math. So the first thing we want to do is take the two derivatives of the function. And so we have those there. We use the chain rule — that's how you get the two out front, in case you forgot the chain rule. And then if you solve that for x and y, well then we have the global maximum, right? So the top of the mountain is at the point (3, 6). But that's not what we want, right? We want the top of the mountain with the condition that we have to be touching the plane as well. So that means we need to take the derivatives of the constraint as well. So when we take the derivatives of the constraint, it's just negative one and one. So then remember, it's going to be the derivative of the function equals lambda times the derivative of the constraint. So we set that up. And as you can see from this, they're both now expressed in terms of lambda. So since they're both in terms of lambda, we can take the one on the left, multiply it by negative one, and then they're both equal to lambda. And then if they're equal to lambda, they're equal to each other, right? So we can eliminate lambda and express it this way. So now looking at that, we can rearrange that and say that x plus y equals nine, right? So we know x plus y has to equal nine, and that gets us almost the whole way there. But then if we look back at our constraint, we realize that y minus x equals zero, or y equals x. So if y equals x and y plus x equals nine, then they both have to be 4.5. Everyone good so far? I feel like I'm going fast. Yeah? That's a given. Yeah. Yeah. All right. Good. So what's next?
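Backing up for a second, here is a numeric cross-check of that paraboloid result. The exact function from the slide isn't reproduced in the transcript, so the sketch assumes a paraboloid peaked at (3, 6); the constrained optimizer lands on (4.5, 4.5), matching the hand calculation.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize an assumed paraboloid f = 100 - (x-3)^2 - (y-6)^2 on the plane y = x.
f = lambda p: -(100 - (p[0] - 3)**2 - (p[1] - 6)**2)   # minimize -f to maximize f
con = {'type': 'eq', 'fun': lambda p: p[1] - p[0]}      # the constraint y - x = 0

res = minimize(f, x0=[0.0, 0.0], constraints=[con])
print(res.x)   # -> approximately [4.5, 4.5]
```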
All right. So those two examples were both the method of Lagrange multipliers given that you have one function to maximize with one constraint. But for our partition function, remember we have two constraints: the total number has to remain the same, and the total energy has to remain the same. So you can do the method of Lagrange multipliers with as many constraints as you want, but the more you add, the harder it gets to do. And it doesn't really lend itself well to lecture, because it's one of those derivations that takes so long that by the end you forget what you were doing in the beginning. But it works well for reading, and there's actually a really good one at the end of your chapter, in the further information section for the chapter on the Boltzmann distribution. So check that out. It's a good read. I like it. So this is the process you would do. We're not going to actually do it, but this is the process, and it's in the book, where we have all these constraints. And you see that you subtract them. Well, you know, I like to put them on the other side of the equal sign, but we are subtracting them, right? But in this case, you just subtract the two constraints, and each one has its own Lagrange multiplier. So we have a lambda and a beta. And that beta constant actually does end up being the beta constant that you're familiar with, right? The 1 over kT. So that's all we'll say about actually deriving the partition function, but now you understand kind of where it comes from — that e to the minus beta times the energy, it all comes out of this. So, yeah. This is what? F is the largest normal vector? Oh, in the bottom left? F is the function, the one we're finding the extremes of. I'm sorry, I'm not really following it. We'll talk after. But yeah, I mean, don't think too hard about the thing I was saying about the parallel vectors, because we're mostly interested in applying this thing. But I just kind of looked it up myself because I was curious and read a little bit about it, and that's what it boils down to. But keep the focus on applying it, especially for this class. But I'm glad you're curious about it. You should definitely look it up. But I can't talk about it now. So here's an example of the plots of a few partition functions. So if we're interested in knowing the average speed of an atom of these particular noble gases, if you plotted them, they would look like this. And if you notice, the helium one is a lot faster than everyone else, right? And the correlation here is the lighter it is, the faster it's going, right? And the reason for this is that temperature is tied to the kinetic energy of these gas molecules flying around in three dimensions, right? So the kinetic energy relates to temperature, but kinetic energy has two pieces — it's 1 half m v squared, right? So mass and velocity both tie into it. So basically, if these four gases are all at the same temperature, then they all have about the same kinetic energy. And if they're all at about the same kinetic energy and one has less mass, then it's going to go faster, right? Because it's 1 half m v squared. So that's why you see it, just kind of qualitatively. So let's talk about finding the average kinetic energy of a gas molecule. So there's our equation for kinetic energy, in case you forgot it. But down here we have a distribution. And this distribution represents the probability of finding a molecule in a particular energetic state. So it's a population, right? So basically you have the state you're interested in over all of the possible states. And so it's kind of like taking a fraction, right? It doesn't look immediately like a fraction, because you have an integral on the bottom, right? But remember, an integral is a way of approximating a summation, right? So if you have a summation and you're adding all these things together, an integral is a way of approximating that, right? And you can think of it like this: if you have the area under the curve and you divide it all up into a bunch of little boxes and you find the area of each box and add them up, that's the summation, right? So the function will give you the height of the box. So you have the function times the change in the x-coordinate, which is your dx, right? So it's right there when you take an integral. You're really just taking a bunch of areas and adding them together. So that's why it doesn't immediately look like we have the sum of a bunch of different possibilities on the bottom and one possibility on the top — but that's what this is. But we can simplify it a little further. So that function on the bottom there is just the integral of a Gaussian, which is a very well-known integral. So there's the rule: the integral of e to the minus a x squared is the square root of pi over a, where a is your constant. So in this case, our constant a is just one-half beta m, right?
So then that's how we go from the equation on the left to the one on the right: we just perform that integration and it's a constant. So then that goes out front. So now this is going to represent our population in a more concise way. All right. Now let's say we're interested in the average kinetic energy. So you'd think, all right, we need the average velocity to get that, right? Because we know the mass; it's going to be a constant of whatever we're dealing with. But we need the average velocity to figure out what our average kinetic energy is. But if you take the average velocity, say in one dimension, so we're just in the x direction, you've got an equal probability of going right and left, right? So whatever distribution of speeds you have going right, you're going to have that same distribution going left. So the average of all of them is just going to work out to zero. So we can't do that. So instead of the average velocity, we're going to take the average of the square of the velocity, which, since the average itself is zero here, is just the variance. And you guys should be familiar with the variance. And if you're not, you're definitely familiar with the square root of the variance, which is standard deviation, right? Which I think people like to ask about after there's a test, right? So we're looking for the expectation value of the velocity squared. Now what's important to notice about this integral here is you've seen this type of treatment before when you took quantum mechanics, right? Because in quantum mechanics, your wave function, when you multiply it by its complex conjugate, gives you a probability distribution, right? It gives you the probability distribution of where the particle could be. So if we're interested in the average position, we took our position operator, which in one dimension is just x, and you sandwich that in between the wave function and its complex conjugate, right? So you're essentially multiplying the thing we're interested in by the probability density. And we're doing the exact same thing here. It's just that we start with a probability density, so we don't need to multiply anything by its complex conjugate, but it's the exact same process. And then you integrate it over all the space that you're interested in. So that's what we're doing. And it's essentially a weighted average, right? So going back to how an integral is really the limit of a sum, it's kind of the same thing, in that you take this variable, multiply it by whatever that function is at that point, and then you keep adding them together. So what you have is a weighted average. And we'll do an example later where we don't need an integral, because there's a small enough number of states that we can actually just take a standard weighted average. So keep that in mind. Okay. So that's why we set up the integral the way we do. So now let's take the integral. So if you notice, v squared is an even function, and the Gaussian is an even function, so you have even times even is even. So the trick with even functions is you don't need to integrate the entire thing, right? You can integrate only to the right of zero and then just double it, right? Because whatever area is on the right side, the same is on the left; that's a property of even functions. And also while we're talking about that, remember how I said with the velocity, just qualitatively, we can't get an average velocity because it can go right or left?
If we look at this thing mathematically, if we just put a v there instead of v squared, v is an odd function. So you've got odd times even. So if you take that integral, you get zero. So the math works out as well. It matches our intuition in that your average velocity is going to be zero, which is one reason why things like variances and standard deviations are so important, because sometimes you can get more information from those than you can from an average. Average is just kind of the go-to thing because it's easier to understand. So if we solve this integral, which is also a well-known integral, it simplifies down to 1 over beta m. So the average of the square of the velocity only depends on 1 over beta m. So we are out to get the average kinetic energy, right? So that's one half m v squared. And if we have the average value of v squared, we can just plug that right in. So the average of the square of velocity is, sorry, the average kinetic energy works out to be one half kT, right? So it's nice and simple. It just works out to one half kT. But since not too many experiments are run in an infinitely long tube of infinitely small diameter, right, because that's what we'd have if we had molecules going in one dimension, we need to worry about the other two dimensions, right? But this is an ideal gas. So we can work on the assumption that the velocities in x, y, and z are not correlated with each other in any way. So if that's true, then we can treat it as a linear system, meaning we just add them up. So if you're interested in your average kinetic energy in all three dimensions, you just add it up for x, y, and z. So instead of one half kT, it's just three halves kT. We just multiply by three to account for all three dimensions, all right? So that's how you figure out the average kinetic energy of an ideal gas molecule. Okay.
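As a quick numerical check of all of that before moving on, here's a sketch (the gas and temperature are assumed for illustration; argon at 298 K) showing that the average velocity vanishes, that the average of v squared is 1 over beta m, and that the average kinetic energy per dimension is one half kT:

```python
import numpy as np
from scipy.integrate import quad

kB, T = 1.380649e-23, 298.0             # J/K and K
m = 39.95 * 1.6605390e-27               # kg; argon, an assumed example
beta = 1.0 / (kB * T)

# Normalized one-dimensional velocity distribution from above.
p = lambda v: np.sqrt(beta * m / (2.0 * np.pi)) * np.exp(-0.5 * beta * m * v**2)

mean_v, _ = quad(lambda v: v * p(v), -np.inf, np.inf)   # odd times even: zero
half, _ = quad(lambda v: v**2 * p(v), 0.0, np.inf)      # even: integrate half, double it
mean_v2 = 2.0 * half

print(mean_v)                            # ~0
print(mean_v2, 1.0 / (beta * m))         # these two agree
print(0.5 * m * mean_v2, 0.5 * kB * T)   # one half kT per dimension; times 3 for 3-D
```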
And so as we showed, the average speed goes up the hotter it gets, right? So this just shows a few Boltzmann distributions for different temperatures. And you see that as it gets hotter, you know, 2,000 degrees Celsius, the average, the center of that distribution, is much further to the right, and it's more spread out. And it spreads like that because, you know, more states are energetically accessible, right? All right. So now we'll do another example. This is a cool one. So for this one, let's say we're exposing some paramagnetic substance to a magnetic field, right? So you have all these magnetic dipoles that can point any way, right? And when you put a magnetic field on it, they try to align with the magnetic field. So then there's a statement: heating a magnet makes it lose its magnetization. Why? So if we think about the situation of like a permanent magnet, like something on your refrigerator, that's probably ferromagnetic, but we'll go with it anyway. If you've got a field that's trying to align all the dipoles in the one direction, then heat is just kind of somebody walking around kicking them all over the place. That's the idea of heat. So Richard Feynman, one of my favorite scientists in history, used to talk about how atoms jiggle, right? So if you have any substance, the molecules, the atoms, they're all jiggling around. And the hotter it gets, the more they're jiggling. So the same thing happens in a substance that has magnetic dipoles. The dipoles get kicked around as the atoms get kicked around. So the hotter it is, the more you're kicking these things around randomly. So you've got two competing things. You've got a magnetic field that's trying to align the dipoles in one way, and then you've got heat just kicking them around randomly. So that's why you can make a magnet lose its magnetization by heating it up. Because with a permanent magnet on the fridge, what happened is you exposed it to a field long enough that they all kind of just got stuck that way. But if you heat it up, then you're kind of giving them energy to jiggle around all over the place, and then they end up randomly aligned. Yeah? Is that why when you cool a magnet, it gets stronger? That's a good question. I don't know about that effect. Do the dipoles align even more? To be honest, I don't know. I'm not going to try and fake an answer, because I'm not sure. I'd have to look that up. But I know with superconductors, you know, electromagnets often have to run really cold. But I don't know about just cooling a permanent magnet and whether that increases the field. I've never heard of this, but that doesn't mean that it doesn't exist. So we're going to approximate the system as a, what do you call it, a two-state system, right? So we've got all these dipoles. They could point any which way, but we'll just say, and it's a good approximation, that all these dipoles are either pointed with the field or against the field, right? So if it's with the field, we'll call it parallel. If it's against the field, we'll call it anti-parallel. So which one do you think's higher energy? Anti-parallel or parallel? Good, anti-parallel, because you want to think about potential energy, right? So if it has potential energy, the field is pushing on it, it wants to pop over, right? But once it's popped over, it's not going to do anything else, right? So it's like dropping something, potential energy, right? If you already fell on the floor, you can't fall any further. So same kind of thing. So let's see here. So the energy for the ground state, we're going to call that negative mu naught B naught. And then if it's anti-parallel, then it's going to be the same magnitude, but positive. So therefore the energy difference is going to be two mu naught B naught. So there's our energy difference. So let's try and find the partition function. And remember, the partition function was in that previous example, that one on the bottom where we had that integral of all the possibilities. But for this particular system, there's only two possibilities. So we don't need an integral to do it, right? All we need is a weighted sum of these different states. And these states that we add up, and if you take one thing out of lecture today, don't forget this, each term is going to be the degeneracy of that state times e to the minus beta times the energy. So that's how we get the one and the e to the minus two beta mu naught B naught. The ground state we always call zero, because we can establish our energy scale any way we'd like. And then later, if you need to correct it for zero point energy, you can always add it back in. So you always call the ground state zero energy. So we have zero energy, and for the degeneracy of both of these states, there's only one of each state, right? So the degeneracy is just one for both of these. So for the ground state you're going to have e to the minus beta times zero, which is e to the zero, which is one. That's why we have the one.
And then you're going to have one times e to the minus beta times the energy, the amount of energy we've raised by, which is two mu naught B naught. So this is our partition function, q. Now, from the partition function we can figure out the population of each state. So remember how I said it's a fraction, right? You have the one particular state you're interested in over all the possible states. So that's what we have. So for the ground state, the first term was just one. So one over q gives you the population in the ground state. And if you're interested in the population in that excited state, that's going to be the second term. So e to the minus two beta mu naught B naught, and then you divide that by your partition function. So those are our populations. So now if we're interested in the average magnetic moment, the way we're going to do that, it's going to be a weighted average again. So the weighted average is going to be the magnetic moment times its population in that particular state. And so this is our weighted average. If you had me in discussion, you've seen this with, you know, if you have a certain number of people in the room all of different ages and you want the average age. Say you have five 12-year-olds and twenty 20-year-olds, right? You'd have five times 12, and then you add that to the twenty times 20, and then you divide that by the total number of people. That's a weighted average. It's the same thing here. So for the first one, for the magnetic moment pointing with the field, that's positive mu naught, you just multiply it by the population that's in that state, which would be the P naught up there. And then for the state pointing against it, we're calling that negative mu naught, you multiply that by the population in that state, so that's the P1. And then all the way to the right, we've just expanded that. So if we want to get fancy, we can realize that this is the definition of the hyperbolic tangent and write it that way. But that's not as productive. Instead we're going to make an approximation here. And approximations are okay, but they have to be justifiable. So the way we're going to justify this is let's say that we have a weak magnetic field or a high temperature. So remember we have these competing forces. We have the magnetic field that's trying to get everyone to line up evenly, and then you've got the heat that's just kicking everything all over the place randomly. So we're saying that the heat is winning, basically. And you can do that with just a really weak magnetic field or you can do it with really high heat. You can do it either way. So the important thing is that the ratio is much less than one. So if this is much less than one, then the strategy we're going to use is doing a Taylor series centered at zero. So if you remember with Taylor series, the idea is your Taylor series is always centered somewhere. And wherever it's centered, the closer you stay to that point, the more accurate your Taylor series is, right? And if you're staying really close to that point, then your Taylor series approximation can be pretty good with just a few terms. So that's the approach that we're taking here: we can expand this in a Taylor series and only use a few of the terms, because we're so close to zero and we're centered at zero. So technically that's a Maclaurin series, right, if we're centered at zero.
So looking at this first expression in the parentheses, and remember this came from the numerator of that fraction, right? This fraction in the bottom left here times the mu naught, that's what we're working with. But we're finding an approximation for it. So the Taylor series for e to the x is just 1 plus x plus x squared over 2 plus x cubed over 6, right? So it follows that pattern. But we're just going to take the first two terms and call it good enough, right? Because we're really close to zero. So the way we're going to do that: take that exponential function right there, and we'll call the exponent, that minus 2 beta mu naught B naught, that entire thing, x. So then the first two terms will be 1 plus x, or in our case 1 minus 2 beta mu naught B naught. So those are our first two terms of the series and we're going to stop at two terms. So now if you distribute that minus sign in there, then you get 1 minus 1, so the ones go away and you're just left with 2 beta mu naught B naught, alright? So that's the approximation for the numerator. Now if we look at the denominator, we'll do the same thing, take the first two terms, so you're going to have 1 minus 2 beta mu naught B naught, but this time the one's positive, so you get 1 plus 1 equals 2, and then you subtract the 2 beta mu naught B naught. So now we can make even one more approximation, because up top we said that our beta times mu naught B naught, remember beta is just 1 over kT, is really small, it's much, much less than 1. So if that's true, then we're subtracting a really tiny number from 2, and we don't really care if it's 1.99999 something or if it's 2, right? We're just going to say it's 2 because it's so close to 2. So we're going to approximate the denominator as just being 2, and then if you write that out with this fraction, this is what you get. So the expectation value for the magnetic moment for this particular substance in a weak field at high temperature is just mu naught squared B naught over kT, or times beta, however you want to say it. So this is known as Curie's law, and here's a plot of it, and you know at high temperature populations tend to equalize, and this kind of confirms what we were thinking about just qualitatively, right? If temperature is just kicking these dipoles all over the place and our magnetic field is too weak to keep them aligned, you're not going to see much of a net magnetic moment, right? So that's what we have here. Any questions on that? Okay. We're just blowing through this stuff. Am I going too fast? Is there anything you want me to back up to? No? Well, if we get to the end with time left, let me know if there's something you want to rehash and I'll do it. I'm from the northeast too, we tend to talk fast, so I don't know.
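Before the next example, here's the whole two-state magnet worked through with numbers, a sketch that assumes a dipole moment of one Bohr magneton in a 1 tesla field at room temperature (all illustrative choices) and compares the exact hyperbolic-tangent result to the Curie's law approximation:

```python
import numpy as np

kB, T = 1.380649e-23, 298.0     # J/K and K
mu0 = 9.274e-24                 # J/T; one Bohr magneton, an assumed dipole moment
B0 = 1.0                        # T; an assumed field, weak in the beta*mu0*B0 sense
beta = 1.0 / (kB * T)

x = beta * mu0 * B0             # must be << 1 to justify the two-term expansion
q = 1.0 + np.exp(-2.0 * x)      # partition function: 1 + exp(-2*beta*mu0*B0)

p_ground = 1.0 / q                      # population parallel to the field
p_excited = np.exp(-2.0 * x) / q        # population anti-parallel

exact = mu0 * (p_ground - p_excited)    # = mu0 * tanh(x), the exact weighted average
curie = mu0**2 * B0 / (kB * T)          # Curie's law approximation
print(x)              # ~0.002, so the approximation is justified
print(exact, curie)   # nearly identical
```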
Alright, so now we're doing an example kind of near and dear to my heart, a protein folding example, right? So we talked about toy problems, right? Like in quantum mechanics, a toy problem would be a particle in a box or a harmonic oscillator, right? So now for protein folding. So when proteins are synthesized in the body on the ribosome, as they come off the ribosome they fold in all sorts of different ways, because, you know, the different amino acids are attracted to each other in different ways. And so they kind of fold up into this shape, and do so really consistently, and that's what gives them their function: their structure. So it's a really fascinating area of research. So you can kind of look at this as a toy problem for protein folding. So we've got six beads, like a string of six beads, and we're going to say that these beads are all attracted to each other. So we'll say, kind of like van der Waals forces. So these beads are all attracted to each other and they can bend around in different ways and possibly stick to each other. So we're going to look for a partition function for this thing and figure out the populations of these different states. So as an approximation, let's say that these things can only snap into place at right angles, right? So if the only bending this thing can do is at right angles, there's 21 different ways you can configure this. And you can thank Dr. Martin for drawing every single one of those things. That must have taken forever. So you have all these states here. So there's 21 different ways you can bend this thing at right angles without having anything touch. If you allow two of these beads to touch, there's 11 ways to do it, and if you allow two contacts to happen, there's four ways to do it, all right? So these are our microstates. So when I say there's four ways to do it, 11 ways to do it, 21 ways to do it, what am I talking about? It starts with a D. Degeneracy. Good, good, good. So we're talking about the degeneracy of these different states. Because energetically we're going to say they're the same, right? They're the same state energetically. We're only showing three different energetic states, but they all have different degeneracies. There's more than one way to do each one. So now let's try and figure out: which one do you think is the highest energy state? Which do you think is the lowest energy state? How about the bottom, the one with the four microstates? Do you think that's the highest or lowest energy? Lowest, good. And again we're thinking of a potential energy thing here. So if these things are attracted to each other and then they collapse together, if we look at the one with the four microstates, it's already collapsed, right? It can't collapse any further than that. So that's going to be our bottom one. That's going to be like our ground state. And then one contact would be the first excited state, I guess you could call it. And then the highest energy state is the one with the 21 microstates there. So everyone good with that? So let's try and find a partition function for this thing. Okay, so first of all, this is the energy we were talking about, right? Energetically, there's only three different states, but they each have their degeneracy. And we're looking at, again, kind of a linear relationship between energy and number of contacts. So the fewer contacts there are, the higher in energy you are. That's why we just went with a linear relationship here. If you look at the coefficient in front of E naught, it goes zero, one, two, right? So here's our partition function. So looking at the ground state, there's four different ways to make that ground state. So our degeneracy is four. And remember we always call the bottom energy level zero. So then you have e to the zero, which is one. So our first term is just four, right? Because degeneracy four times e to the zero, which is one, is four. Second one, there's 11 ways to do it. And the energy for that particular state is E naught. So it's e to the minus E naught times beta. And then the last one, there's 21 ways to do it.
And the energy we determined was two E naught. So that's going to be 21 times e to the minus two E naught times beta. All right? So remember this for writing the partition functions of these smaller systems, where we can actually write out every term of the partition function. Typically things are too complicated for this, to get them all. But we can have these toy problems, like we do in quantum mechanics, to look at that. And you can have different kinds of partition functions. So like the one where we determined the average kinetic energy of the gas molecules, we were talking about translational energy, right? Which is a really good way to describe the energy of those atomic gases, right? Because the atoms are considered points as far as the mechanics of it go, for the most part. So all you're interested in is translational energy. But you can have a gas molecule that's big and can rotate, right? So then you have rotational partition functions as well. So there's all kinds of these partition functions you can come up with. Yeah? It's equivalent. So beta equals one over kT. I prefer to always use beta. Dr. Martin was saying this, that experimentalists tend to favor one over kT because temperature is a more natural unit if you're running experiments, and theory people tend to favor beta. So maybe that's why I favor beta. I don't know. And I think I just out of habit called it beta every time. But it's equivalent. Any other questions? We have almost, a little over five minutes. If there's anything you'd like me to do again; if not, we can call it a day. It's up to you. All right. Thanks. Thanks, guys. Thank you.
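For reference, here's the bead-model partition function from this lecture evaluated numerically, a sketch with an assumed contact energy (E naught is not pinned down above, so the value below is purely illustrative):

```python
import numpy as np

kB = 1.380649e-23                 # J/K
g = np.array([4.0, 11.0, 21.0])   # degeneracies: fully collapsed, one contact, no contacts
e0 = 2.0e-21                      # J per lost contact; an assumed illustrative value
E = np.array([0.0, e0, 2.0 * e0]) # energies go 0, E0, 2*E0, linear in lost contacts

for T in (100.0, 300.0, 1000.0):
    beta = 1.0 / (kB * T)
    terms = g * np.exp(-beta * E)   # degeneracy times e^(-beta * energy), per level
    q = terms.sum()                 # q = 4 + 11 e^(-beta E0) + 21 e^(-2 beta E0)
    print(T, q, terms / q)          # populations: compact wins cold, open wins hot
```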
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 25. Molecular Structure & Statistical Mechanics -- Partition Functions -- Part 3. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:01:02 Simulation 0:06:38 Lagrange Multipliers: Motivation 0:16:30 Multiple Constraints 0:38:36 Curie's Law
10.5446/18931 (DOI)
I have a couple of announcements to make. One is your exams are all graded and they've been sent for scanning. The average is about 64, so great job. That's a lot better than last time. So I think you've been working really hard and I'm happy that everybody's learning the material. Other announcements: next week is the last week of the quarter. That means it's the last PChem seminar that you can go to for extra credit. So if you want to do it, that's the last chance. And again, those are due a week after the seminar. So that'll be the last one. People have been asking me about the final. It is definitely cumulative. Everything is going to be on it. It's two hours, so it'll be long. Does anybody have any more questions about stuff like that? Logistics, the final? Yes? Will the final be about the same as the midterms? Yeah, the final will be pretty comparable to things you've seen before. I don't think there will be any surprises. Obviously, it'll be longer because there's more time and it has to cover everything, but otherwise I think it'll be pretty comparable. Yes? How many cheat sheets? How many cheat sheets? One. The same as before. Yes? Will it be equally distributed between everything we've learned or is it going to be heavy on the stuff we're about to learn? It's going to be pretty equally distributed among everything that we've seen this quarter. And I'm going to tell you why that is. So PChem is really hard and it takes some time to understand some of the concepts, and not everybody gets it right away. I didn't hear that. But what really matters, to me at least, is what you know at the end. And so if you did really well on the final and you happened to stink it up on one of the previous exams, then that will be taken into account. So that is why the final is completely cumulative, with sort of equal amounts of everything we've learned. You know, there's a distribution of what's worth how much, and that's the default situation, but if there's a situation where somebody did really, really poorly on one of the previous exams and much better on the final, then that will be taken into account. I pretty much always do this in PChem. It usually affects a small but non-zero number of people's grades. So, you know, mostly it doesn't make such a big difference, but sometimes it does. So if you had a problem with one of the previous midterms, I really encourage you to make sure that you understand what you did wrong and be able to do better on the final. All right, any more questions? Okay, let's start talking about stat mech. All right, so in a way this is a really big departure from the things that we've been doing. So far in this quarter of PChem we've talked about some aspects of molecular symmetry and how this relates to spectroscopy, and how spectroscopy works in all its different forms, and how we can use that to get structures of molecules. And the whole emphasis so far has really been on individual molecules and how we can determine their properties. Of course, we're talking about bulk techniques. Usually when we do spectroscopy, we're not looking at a single molecule. We have a whole ensemble of them. But we're ignoring that fact. We're fundamentally concerned about properties of one molecule, whether it's geometry or electronic structure or vibrations and rotations, things like that.
And now with stat mech we're going to make the transition to talking more about ensembles of molecules and properties that have to do with what happens when you get a whole group of molecules together. Statistical mechanics is a really important sort of bridging topic, because this is what makes the connection between all of these individual-level properties of molecules, the microscopic picture that we've been learning about, and the macroscopic properties that you know from thermo. So this is, you know, hopefully a topic that helps to make some connections between how we know this stuff on the microscopic level and how we can use it to tell us something about the bulk properties, you know, the things that we get from thermodynamics. Okay. So, let's start with relating it back to a topic that we've been talking about recently, which is NMR. So, in our whole discussion of NMR, I have mentioned many times that the population difference is really small. When you put your sample in a magnetic field, that breaks the degeneracy of the spin states and it induces a population difference between spins that are aligned with the magnetic field versus against the magnetic field. And I keep saying that this population difference is really small and that's why we need a big magnet if we want to see a stronger signal. Let's look at what else that depends on and see if we can quantify what that population difference is. So far we just know that it's not very big. So, here's an expression for the actual population difference. The big Ns here are numbers of molecules. So, this is the ratio of the number of spins in the state beta over the number of spins in the state alpha. And it has a pretty simple functional form. So, we have this exponential. It depends on h nu sub i; that's the resonant frequency. So, that's the Larmor frequency of our nucleus. And then in the denominator, there's Boltzmann's constant and the temperature. And so, that's it. This really simple expression tells us more quantitatively what the population difference between the spins is. And the extra alpha spins are the ones that make up this magnetization vector that's pointing along Z that we've been talking about manipulating in terms of NMR. And so, this is why we say that in a normal equilibrium population of spin states, most of our spins aren't giving us anything. We don't have very much of an excess of alpha over beta. Now we can quantify that. And that also gives us maybe another parameter that we can change in order to increase the sensitivity. So, we know that this depends on the energy difference, which of course depends on the Larmor frequency. And there's a dependence on the magnetic field in there. So we can increase the magnetic field. We could also lower the temperature. Of course, the problem with doing that in a practical sense is that to really change the Boltzmann populations in an NMR sample, we have to make it really, really cold. We have to get down to, you know, millikelvin to really make a very big difference in the populations. So it's typically not the most practical thing to do, at least in chemistry experiments.
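To put a number on that population ratio, here's a minimal sketch, assuming a proton with a 500 MHz Larmor frequency at room temperature (common but assumed values):

```python
import numpy as np

h, kB = 6.62607015e-34, 1.380649e-23   # J*s and J/K
nu, T = 500.0e6, 298.0                 # Hz and K; assumed 1H example values

ratio = np.exp(-h * nu / (kB * T))     # N_beta / N_alpha
print(ratio)          # ~0.99992: the two spin states are almost equally populated
print(1.0 - ratio)    # ~8e-5 fractional excess, which is why NMR needs big magnets
```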
One thing I just want to mention as a little aside: for people who have done NMR in the context of organic chemistry, have you heard of a cryoprobe for enhancing sensitivity? So that's a cool NMR instrument that people want to have. And a lot of times, people who are using NMR as a technique see this and they say, okay, we have a cryoprobe for enhancing sensitivity, and they think that what that's doing is cooling down your sample and changing the Boltzmann factor. It's not. Your sample is still at room temperature. The electronics of the probe are cold. And so that's just reducing thermal noise in the electronics. So that's how a cryoprobe works. It has nothing to do with this at all. It's just increasing the sensitivity of the instrument by reducing thermal noise from random motions of electrons in the electronics themselves. Okay, not so important to stat mech, but I just wanted to mention it because it's something that comes up. Okay, so here's an expression for our magnetization vector in terms of the Boltzmann constant with all of these factors in there. So we have N, which is the number of spins; of course that's going to be important. If you want a larger signal, you can put in a bigger sample or a more concentrated sample. We have this factor of gamma squared h bar squared times I times I plus 1. I is the spin quantum number of the nucleus. And then again, we have the Boltzmann constant and the temperature in the denominator. So this is kind of our first view of an example of what these population differences look like. And I'm going to sort of go back and forth between talking about fundamentals of probability distributions and things like that, and also showing examples of kinds of things that we've seen before, and then we'll tie the two together at the end. Okay, so here's what we're really talking about with stat mech. And let's make an analogy to our previous discussions, last quarter and this quarter, of single molecules. So if we're talking about single molecules on the microscopic level, the thing that tells us everything we need to know about that is a wave function. So as we've seen, there are all different kinds of wave functions. I mean, you first learned about them way back in general chemistry in the context of electronic structure. But we've learned that there are all kinds of vibrational wave functions. There are NMR wave functions, some of which don't even necessarily look like a function in a traditional sense so much. But this is the thing that describes what the properties of the molecule look like. And in quantum mechanics and spectroscopy, that's the quantity that tells us everything we need to know about the system. In statistical mechanics, the thing that we're interested in is called the partition function. And the partition function applies to a macroscopic ensemble of molecules. It tells us about how the energy is distributed among different degrees of freedom in the whole system. But you can't have a partition function for one molecule. It's something that is an ensemble property. However, it is tied into what the individual molecules are doing via probability concepts. OK, so the partition function is going to be what tells us the thermodynamic information about an ensemble of molecules in terms of what's going on with the individual molecules. So let's look at what we mean when we talk about an ensemble. So the ensemble is a system of N molecules, where N is going to be really large in typical samples that we're going to look at. And it has some total energy, which we can call E. And again, as you've seen in general chemistry and thermo... just out of curiosity, how many people have taken thermo, either in physics or engineering? OK, quite a few.
But everybody's seen the basics in general chemistry. Remember the kinetic molecular theory of gases and how this relates to the ideal gas law? We know that if we have an ensemble of molecules, even though they're all identical, they don't all have identical kinetic energies or velocities or anything like that at particular points in time. There's always a distribution. And so all kinds of these thermodynamic properties that we're interested in, like the enthalpy and the temperature and things like that, are all dependent on the distribution of energies. And our distribution doesn't look like a delta function. It has some kind of a spread-out shape. And again, from looking at the kinetic molecular theory of gases, we remember that if we reduce the temperature of our sample, we get a sharper peak, so we have more of the molecules in the minimum energy state, or the maximum likelihood state, I should be careful there. And also we have fewer that have higher energy, whereas if we make more energy available to the system, the distribution not only shifts to higher energy, it gets more spread out. We have more diversity of states going on. We're going to look at that in more detail later. Another piece of information that is important to all of this is that collisions are important to redistribution of energy. So that's how molecules change their state. They run into each other, they run into the walls of the container, and that's how energy gets redistributed. That's important because one of the things that we're going to look at is, a lot of times in stat mech we make the assumption that if we take a snapshot of a whole huge ensemble and look at the states of all the molecules and the distribution of that, that's equivalent to watching a single molecule over a really long time occupying all these different states. And so you do need collisions to be able to redistribute energy. Okay, so let's talk about our ensemble in a little bit more formal terms. So we have our system of N molecules and the whole thing has energy E, but what are the individual molecules doing? So stuff is going to move around, it's going to change with time, but on average there are N sub i molecules in some state epsilon sub i. How many states are there total? That depends on the specifics of the system. We're going to see some more specifics later on. But however many of them there are, the total energy E is just going to be the sum of the individual energies, of course weighted by how many molecules are in each state. And the total energy that we have can be partitioned among the various states. And that's why this thing is called the partition function. And for any particular system the lowest energy state is epsilon naught. And we generally define that to be zero and measure the other states relative to it, because that makes stuff easier to deal with. Is it really zero? No, it's the zero point energy of the system. But we look at the energy of other states relative to it. So what does this look like for some realistic systems? So here are some proteins that have some different conformational states. And if you look at these little free energy diagrams associated with them, there are different conformational states that the proteins can occupy. They're little local minima. And there are barriers between them. This is just some examples of different states that have different energies that molecules can occupy.
And as you can imagine for something like a protein, that's going to have serious consequences for the function of the molecule. Some of these conformational states are going to be more active than others. And if you have everything frozen into a low energy state that is not active, then the molecule isn't going to be as active as if you have more energy available to mix among these states. Here's another picture of that, showing, for some of these particular minima in conformational states, which of these conformations are active, inactive, or partially active. So this is a good reason why we might want to know something about the distribution of energy that's available to the system and how the molecules are partitioned among the different states. It's a lot more general than this. We can do a lot of things with statistical mechanics. This is just one practical example. Okay, so if we have our ensemble of molecules, the lowest energy state is the zero point energy, which we're going to define as zero. And then the set of populations is the number n for each of the states, over all of the possible states. And of course the number of molecules in all the states, when we add them all up, has to equal the total number of molecules in the system. So if we take our system of a bunch of molecules and take a snapshot, so we get an instantaneous picture of what's going on, then there are n sub zero molecules in the lowest energy state, there are n sub one in e sub one, et cetera. Again, what defines this? It's going to be some sort of Boltzmann distribution based on the relative energies of the states, but we haven't quite gotten to that yet. So we need to write down our instantaneous configuration. And here this is just notation. So this is how we write down how many molecules are in each of the states. And this is an important thing to be able to do, because the probabilities of the states are going to come into account here. So a typical case is that a lot of times the lowest energy state is not necessarily the most populated, because its degeneracy is low. So in many cases it's non-degenerate. So there's only one way to occupy the lowest energy state, whereas higher energy states are going to have a lot more degeneracy. So some configurations are a lot more probable than others. So as I was just saying, if we have all of the molecules in the ground state, so we make our sample really, really, really cold and we try to put all of the molecules in the ground state, that means that our total number of molecules N are all in that state and all of our other states have zero occupancy. So there's only one way to do that. Now let's say that we have two molecules in the first excited state and the rest in the ground state. So now we have N minus 2 molecules in the ground state, and then we have 2 in the next excited state, and then zero for everything else. Let's look at how to write down the number of ways to achieve this configuration. So you can intuitively see where this is going, right? There's only one way to put everything in the ground state. It's like you have a whole bunch of pennies and a bunch of boxes you could put them in, and if you have all the pennies in one box there's only one way to do that. But now if you're going to promote two of them to the first excited state, then you're taking two pennies out of the first box and putting them in the second one, and you can pick any of the pennies you want, so there are more ways to do that.
And so the general expression for this relates to how many choices you have. So when you pick the first one out, so you take your first penny out of the box, you have N choices, because you could pick absolutely any one of them. But then when you go to take the second one, you only have N minus 1 choices left, because you already took one out. So this is how we go about writing these down: there are N factorial ways to order all of the selections, and then you divide by n sub 0 factorial, n sub 1 factorial, and so on, because the order within each bin doesn't matter. And so in general, here's an expression for the number of distinguishable configurations that we can make with some set of objects that can be put in different bins. And again, this is completely general for the probability of sorting anything in any kind of way. Here we're talking about the specifics of putting molecules in different excited states, but it's completely general. So another thing to remember with these systems of molecules is that they're really, really large. So if you have Avogadro's number or even more molecules moving around, the system is fluctuating randomly all the time and it's almost always going to be found in the more likely configurations. And so that's why sticking everything all in the ground state is really, really unlikely. So when N is large, stuff is almost always going to be found in the more probable configurations. All right, so we can look at the weight of a configuration, or how likely it is to happen, by defining the weight. So we already talked about this on the previous slide. Now we're just giving it a name. So this is the number of ways that you can achieve a particular configuration, and of course that's related to how likely it is. And so we can make some approximations; we'll get into the approximations in a minute. Okay, so just a quick sort of practice exercise. If we look at 20 identical objects with six different states they can be in and they have the following configuration, if you have a calculator, work this out and see what you get. What you will learn is that the number is surprisingly large. So even for, you know, we only have 20 things, and you could imagine a cluster of 20 molecules is really unrealistically small. You know, again, in the case of molecules we're talking about Avogadro's number or even more things going on. But here we only have 20 molecules and we have six, let's call them vibrational states. So we have stuff in the ground state and maybe five excited states. So, you know, a very small system in the chemical sense. And we get something like 9.3 times 10 to the 8 ways to achieve this configuration for this really unrealistically tiny system. So this is something to remember when we're talking about partition functions and as you're developing an intuitive sense for which states are going to be more populated than others. At first, you know, when we look at the intensities of spectra, like when we talked about rotational spectra and why the intensities of the peaks look the way they do, at first it's kind of surprising that the ground state isn't the most populated. But when we start to think about numbers like these and realize that there's only one way to get the ground state, whereas the higher energy states have a lot more degeneracy, then we start to see why the lowest energy states are not the most populated.
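Here's that 20-object exercise worked out, a sketch that assumes the occupation numbers on the slide are {1, 0, 3, 5, 10, 1} (the standard textbook example, chosen because it reproduces the 9.3 times 10 to the 8 quoted above):

```python
from math import factorial

def weight(ns):
    """Weight of a configuration: W = N! / (n0! * n1! * ...)."""
    W = factorial(sum(ns))
    for n in ns:
        W //= factorial(n)   # exact integer division; the result is always whole
    return W

# 20 objects in 6 states; the occupation numbers are an assumption (see above).
print(weight([1, 0, 3, 5, 10, 1]))   # 931170240, about 9.3e8
print(weight([20, 0, 0, 0, 0, 0]))   # 1: only one way to put everything in one state
```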
Okay, so now we can start to think about using some approximations. So we have this expression for W. It turns out taking the natural log of it is useful, because we can rearrange some stuff just using the properties of natural logs. And we can write this thing in a little bit different form that enables us to use Stirling's approximation. And this is a really nice thing to be able to do, because taking factorials of huge numbers is difficult. It's computationally intensive when we start to talk about realistic systems, and it provides lots of opportunities for making mistakes. So it's useful to be able to use these approximations. So Stirling's approximation is just: natural log of X factorial is approximately equal to X ln X minus X. And so we can use that to get an approximate expression for the weight of our configuration. So again, this just simplifies things and makes our lives easier, and in the actual systems that we're generally talking about there are so many different configurations and ways to achieve them that this is a fine enough approximation.
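To get a feel for how good Stirling's approximation is, here's a quick sketch (lgamma gives the exact ln of a factorial without ever forming the huge number itself):

```python
from math import lgamma, log

# Stirling's approximation: ln(x!) is approximately x ln x - x.
for n in (10, 100, 1_000_000):
    exact = lgamma(n + 1)            # exact ln(n!)
    approx = n * log(n) - n
    print(n, exact, approx, (exact - approx) / exact)   # relative error shrinks as n grows
```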
Okay, so now the next thing that we want to do is try to find the dominant configuration. So we said that the ground state is not the most populated, because although it has the lowest energy, it's unlikely, because there's only one way to get it. So how do we find the dominant configuration? To do that, we're going to want to maximize the weight, so we want the maximum likelihood of being in a particular configuration. And we're going to do that by varying N sub i, the number of molecules in state i, and we're going to look for the first derivative of W to be zero. And so we can write down some expressions like this. So for example, we know that if we add up all the numbers of molecules we have to get capital N, which is the total number that we started with, and if we add up the numbers of molecules in each state times the energy of that state, then we'd better get the total energy. And then we would like to be able to maximize our configuration. So unfortunately we can't just set that equal to zero and solve for it. That would be nice and convenient. But it doesn't work that way, because the populations aren't independent. So if we take a molecule from one state, that means we have to put it into another state. They all have to go somewhere. We have a limited number of states for molecules to be in. So the N sub i's are not independent, and we have to worry about that. So instead we're going to use a variational method to maximize this. And what we end up with is the solution where N sub i over N is given by one of these Boltzmann factors, and this constant beta determines the most probable populations. And beta equals 1 over Boltzmann's constant times the temperature. And I know I need to give you some practice problems for this material; exploring these kinds of things is something that you're going to need to do in the homework so you can see how it works. But what it comes down to is pretty simple and beautiful. So we have this parameter, Boltzmann's constant, that comes out of looking at these kinds of probabilities, and it also gives us another way to understand what the temperature means. So one thing that comes up when people take thermo for the first time is people say, oh, entropy is really confusing, and this is something that ends up sounding really mysterious when you take thermo. I don't think so. I think entropy is actually pretty intuitive. That's just talking about the numbers of ways to get different configurations. The thing that's confusing is temperature. What does it actually mean, and what does it look like on the microscopic level compared to our sort of everyday understanding of temperature? So it does fall out of this discussion of what's the dominant configuration of our states. Okay, so the only configurations that are allowed are ones that are consistent with having constant total energy. So you can't have configurations that don't conserve energy for the same system. That's another way of saying that if you add up the number of molecules in each state times the energy of that state, you have to get the total energy out of that. The populations are not independent, and these are our constraints on the system. And so again, when we go to maximize our weight and get the dominant configuration, we can't just set that to zero, because we said they're not independent. And so we need to use variational methods, like for example Lagrange multipliers, which means you multiply each constraint by a constant and add it to the whole thing, and then treat your variables as independent. Okay, so who has seen Lagrange multipliers and this kind of methodology before? Okay, quite a few but not everybody. Alright, I will definitely come up with some practice problems for this. Alright, so what I've done here is I'm just multiplying a constant times each of these constraints, that being, you know, having to add up the number of molecules in each state and get the total number, and having to add up the energy of the molecules in all the states and get the total energy. And I'm putting in these constants alpha and beta related to those, and then just putting that into my original equation that I'm trying to maximize, and then I can treat these things as independent, since my constraints are in here. And so now, given these constraints alpha and beta that came from the limitations on our system that we know from just physical common sense, now that these constraints are in here, my populations are all independent and I can do this. I can set d ln W equal to zero. And so this expression is going to be equal to zero when N sub i has its most probable values. And so I get this expression for ln W. And notice I changed the index on the sum before differentiating ln W, just to avoid confusion. And so then if we differentiate it, here's what we get. So we can take the derivative of the first term, and here's what we end up with so far. And let me get to the derivative of the second term. I know I'm going through this quickly. Basically at this point I just want to get to the result, and you can see essentially how we get there, and then you're going to do a little bit of practice with this in the homework problems, and we can go back and talk about it if people are confused. Okay, so I changed the index on the summation so as to not get the differentiation variable N sub i confused with what we're summing over. And so we get this situation where if i does not equal j, dn sub j by dn sub i equals zero, and if it does, it equals one. And so we get this expression in terms of a Kronecker delta. And what falls out of the whole thing is this. So we had to take the derivative of the first term and the second term separately. And so we get minus ln n sub i plus one, plus ln n plus one. And so that just gives us minus the natural log of n sub i over n, which comes back to these things that we've been looking at that we get as a Boltzmann distribution. Okay, so now let's put this back in in terms of our constraints.
So we've got this thing that came out as ln n sub i over n, and then remember we had to add in our constraints. So there are constants associated with having to add up the total number of molecules to big N and having to add up all the energies to the total energy. And so that's where we get these parameters, if we look at what alpha and beta mean. And we see that beta determines the most probable populations, and this comes out in terms of temperature. So again, that was fast. If you've seen Lagrange multipliers before, or even if you haven't, hopefully at least you get an idea of where it comes from. I don't expect everybody to get all the details right away. I just kind of wanted to go through it once and introduce it. We will have some practice on this for the homework. The really important thing to take away from it right now is the result, and that this is where the Boltzmann distribution comes from. Okay, so here are our relative populations. If you want to know how much of the molecules present are in state i versus state j, we take the ratio of these Boltzmann factors. And so what we can see right away, and the take-home message, is that the relative population of two states falls off exponentially with their energy difference. And so for example, we can go back to looking at rotational states, as we talked about in the early part of the quarter with rotational spectroscopy, and actually come up with a quantitative expression for the relative populations of the J equals 1 and J equals 0 rotational states of HCl at 25 degrees Celsius. And so this is going to be based on, you know, we said it falls off exponentially with their energy difference, but it's also going to be based on their degeneracy. So for the ground state there's no degeneracy, right? There's only one way to do that. For J equals 1 we have three different values of M sub J. We have minus 1, 0, and plus 1. And so its degeneracy is 3. So there are three times as many ways to get that first excited state as the ground state, even though it costs more energy to get there. And so again, here's what the spectrum looks like. So we already know that the ground state isn't the most populated. The energy of a level with quantum number J is hc times the rotational constant times J times J plus 1. So the difference between these two states is going to be 2hcB, and we can look up the value of the rotational constant for HCl. And so at 298 Kelvin, making sure to put these things in kelvin so that our units work out, we get about 207 wave numbers for this factor kT over hc. And so then to get our relative populations, we have to stick the factor of the degeneracy in front of it. So we have 3 over 1 for the difference in degeneracy, and then it falls off exponentially as the energy difference between the two states. And so the relative populations here are described by this quantity, and we get that there's about 2.7 times more intensity in this first excited state than the ground state, which, you know, is not exact, but it tracks pretty reasonably with the degeneracy. So we see that we have two factors here when determining the relative populations of states. One is their energy difference, and that's important, but the degeneracy in a way is even more important, because we have a lot of molecules and the probable configurations are much more likely to be occupied.
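The HCl numbers are easy to reproduce; here's a sketch, assuming the looked-up rotational constant is about 10.59 wave numbers (close to the accepted value for HCl):

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e10, 1.380649e-23   # c in cm/s keeps B in cm^-1
B, T = 10.59, 298.0                                      # cm^-1 (assumed lookup) and K

print(kB * T / (h * c))                             # ~207 cm^-1, the factor quoted above

ratio = 3.0 * np.exp(-2.0 * h * c * B / (kB * T))   # (g1/g0) * exp(-2hcB/kT)
print(ratio)                                        # ~2.7: J=1 beats J=0 because its g is 3
```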
Okay, so again, you know, we're just starting to get into this stuff. We're going to talk about it more. We're going to have some practice problems. The take-home message so far is you should be able to look at differences in states. You should know how the relative populations depend on the degeneracy and also the energy difference between them. If you can't reproduce these derivations right now, that's alright. We're going to have some more opportunities to practice. The main thing is just knowing the result. We're going to quit there for today and I will see you next time.
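As a closing sketch tying the derivation together: numerically maximizing ln W subject to fixed N and fixed total energy lands on Boltzmann populations. A constrained optimizer stands in for the Lagrange-multiplier algebra here, and the five equally spaced levels are an assumed toy system:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

eps = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # assumed level energies, arbitrary units
N, E_tot = 1000.0, 800.0                    # constraints: total number and total energy

def neg_lnW(n):
    # ln W = ln N! - sum of ln n_i!; gammaln(x + 1) = ln x! and handles non-integers
    return -(gammaln(N + 1) - gammaln(n + 1).sum())

cons = [{'type': 'eq', 'fun': lambda n: n.sum() - N},
        {'type': 'eq', 'fun': lambda n: (n * eps).sum() - E_tot}]
res = minimize(neg_lnW, x0=np.full(5, N / 5), method='SLSQP',
               bounds=[(1e-9, N)] * 5, constraints=cons)

n_opt = res.x
print(n_opt / N)               # the dominant configuration
print(np.diff(np.log(n_opt)))  # ~constant: populations fall off exponentially, i.e. Boltzmann
```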
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 22. Molecular Structure & Statistical Mechanics -- The Boltzmann Distribution. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:05:23 NMR Population Differences 0:10:34 Statistical Mechanics 0:24:26 Weights of Configurations 0:41:17 Relative Populations 0:43:05 Rotational Spectrum of HCl
10.5446/18930 (DOI)
Good morning everybody. Today we're going to talk about the exam. Anybody have any questions before we do? What's the mean? I don't know. You know, again, it takes you guys an hour to take the exam. Multiply by 207 or whatever. There are four of us grading it. Okay, it doesn't take as long to grade it as it does to take it, but it's not going to be all graded and ready to go by the next class period. Sorry, there's only four of us, we can't do that. So it's almost done being graded. We'll get that finished today. Then it gets sent to wherever it goes for scanning. And then as soon as I get the PDFs back, I will give them to you. But I have zero control over how long it takes to scan stuff. So I know you're anxious to see your score, but it just takes a little while. It's a big class. Okay, so let's talk about the exam. There's a question in the back? Yeah, from what you've seen so far, how have people done? It looks like people have done okay. It looks all right. People ran out of time. I expected that. That's fine. It's normal. But overall, it seems pretty reasonable. Okay, so let's get started going over this. All right, so first question. IR spectrum of carbon monoxide. Why is there a gap in the center? Yeah, there's no delta J equals zero transition. So anything mentioning that that transition is forbidden, or that the specific selection rule prohibits it, anything like that got full credit. Okay, so then the next question is: what is the energy in wave numbers of this transition? And, you know, this is really straightforward. You just read it off the spectrum, right? Because since there's no delta J equals zero transition, that spot in the middle where there's no line, that's where the nu equals zero to nu equals one transition is. And so if you just read it off the spectrum, you get something like this. It's hard to read from that little plot. So if you got anything close, that counted. Okay, so the next question is using the spectrum to estimate the force constant for the CO bond. And I'm not going to work all of this out in the interest of time, but basically you need to use that transition frequency. So that's what we do. Of course, this is the reduced mass, and k is in newtons per meter. And the actual value, if you look it up for carbon monoxide, is about 1860. You know, again, it's hard to read the transition frequency off that little plot, and that sort of determines the answer you get. So if it was consistent and you got something close, you got full points. So that was really straightforward. It's just kind of remembering which equation to use and plugging stuff in. As always with such things, there are opportunities to make mistakes, getting the units wrong and, you know, forgetting to convert things. But otherwise, that one was pretty straightforward.
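Here's the force-constant arithmetic as a sketch, assuming you read roughly 2143 wave numbers off the spectrum (the actual read-off varies with the plot, which is why anything close counted):

```python
import numpy as np

c = 2.99792458e10                   # cm/s, so the frequency stays in cm^-1
amu = 1.66053907e-27                # kg
nu_tilde = 2143.0                   # cm^-1; an assumed read-off for the CO fundamental

mu = (12.000 * 15.995) / (12.000 + 15.995) * amu   # reduced mass of 12C16O
omega = 2.0 * np.pi * c * nu_tilde                 # angular frequency, rad/s
k = mu * omega**2                                  # k = mu * omega^2
print(k)                            # ~1.86e3 N/m, matching the ~1860 quoted above
```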
So, I think that's like, you know, when you go to iTunes and you buy a song and they charge you $99 instead of 99 cents, is that close enough or is it completely wrong? I know what I think. So, you can hope the TAs are nicer than I am. I'm not sure. That's like you gave the baby a thousand times too much medicine and he died. It's bad. Well, it is. Those order of magnitude errors cause huge problems in real life. Your Mars rover crashes into the surface of the planet. You don't get partial credit. You get fired. You know, I'm making fun, but the things where you have a small error in the last couple decimal places or whatever because of round off problems, that's not really a huge deal. But yeah, orders of magnitude matter. Okay. All right. So, how would the Raman spectrum of CO look different? So, if you're looking at a Raman spectrum instead of an IR spectrum, how would you be able to tell that? Okay. So, you had to give at least two differences, and here are the ones that I could think of. You would have a Rayleigh line in the center of the spectrum, which is not called a Q branch. It's a Rayleigh line. So, the Q branch is if you have, you know, something like a radical where there's electron orbital momentum and so the central transition isn't forbidden, then you see that. For a Raman spectrum, it's the Rayleigh line. Other things could be the spacing is four times the rotational constant, or the line intensity is smaller. And so, any two of those were fine. Question in the back? You definitely got something for that. I don't remember exactly how we did that one. Yes? I was wondering, when I was taking the exam, I was thinking about a note saying that there's an O branch and a Q branch and an S branch in Raman spectroscopy. How does that factor into the whole thing? Okay. If you had labeled all three branches there, that doesn't hurt, because that's, you know, we can talk about it later if you want, but it seems like not worth spending a lot of time on. But yeah, if you drew it and labeled all three branches, then that's something different. But yes? Isn't there a difference in the number of peaks as well? What do you mean a difference in the number of peaks? The number of peaks that would show up on the IR versus the Raman spectrum? It would be pretty hard to tell. Because I thought that's what we were using our point groups for before, the number of peaks. Sure, but it's the same molecule and it's a linear molecule. So if you had a, you know, if you had a complex polyatomic molecule, that would definitely be true. You would see different vibrational modes. But this thing only has, it's a diatomic molecule, so it's only got one vibrational mode, right? So that wouldn't really be applicable here. But you're right, if you had a complex polyatomic molecule, that would be totally true. Okay, let's go on to the next one. All right, so for this one, it's sort of a standard Birge-Sponer plot, but with a little bit of a twist, which is that there's a break in the middle of it. So what that tells you in practical terms is that it has a weird potential. So this was from a paper in Science a few years ago on beryllium dimer and looking at beryllium dimer forming. And at first the, you know, the dissociation energy of beryllium dimer was estimated incorrectly because it has this weird shape in the experimental potential. It deviates really strongly from a normal Morse potential.
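For reference, the standard Birge-Sponer bookkeeping that the rest of this problem leans on looks roughly like this; the level spacings below are made up for illustration, not the beryllium dimer data.

```python
import numpy as np

# Hypothetical successive vibrational spacings Delta G (cm^-1) plotted vs. v + 1/2;
# for a Morse potential these fall on a straight line.
v_half = np.arange(5) + 0.5
dG = np.array([220.0, 190.0, 160.0, 130.0, 100.0])

# Linear fit: dG = intercept + slope*(v + 1/2), with slope = -2*we*xe and intercept ~ we
slope, intercept = np.polyfit(v_half, dG, 1)
we, wexe = intercept, -slope / 2.0

# Dissociation energy from the fit (area under the line): De ~ we**2 / (4*we*xe)
De = we**2 / (4.0 * wexe)
print(f"we = {we:.1f} cm^-1, wexe = {wexe:.1f} cm^-1, De = {De:.0f} cm^-1")
```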
So why does it have this weird shape? Well, if you think about the electronic structure of beryllium dimer, you can kind of tell it's not going to form a very strong bond, which is true. And, you know, since it has this weak bond, it's got a strange potential. And this was something that was a relatively recent result. It was in Science a few years ago. Okay, so what does this mean for doing the problem? It means you can't really look at the, you know, the value for nu max. So you're used to the X intercept telling you something about the convergence limit. And, you know, here that's kind of hard to get because, you know, we have these two breaks in the plot. But so anyway, the hint was to use the part of the data that's consistent with the Morse potential, which was the first part, which is shown by this line here. And so you could do that. And so as always for these things, the slope tells you something about the potential. And so then we know that the Y intercept gives you omega e, as normal in this plot, and D e is just the usual combination of the slope and the intercept. All right. So, and that was about 625. And so that's it, pretty straightforward. So the only part that had information content here was just knowing which equations to use and being able to get this off the plot. Okay, so then moving on to, yeah. I think I got a negative. Well, does that make sense? I mean, so it depends on, you know, you're thinking about, it's an absolute value thing. I mean, you're thinking about putting in, you have to put energy into the molecule to get it to dissociate, right. So that's, so. So if your slope is negative, and your omega e is positive, then your x e is going to come out negative. Right. And if your x e is negative, then your D e comes out negative. But take the absolute value. But that's just one of those things where, you know, the convention is sort of, you know, it could be either way depending on whether you're a chemist or an engineer and what you're doing. So, you know, as far as whether the dissociation energy is positive or negative. Yes. What? For the question, is that for the question? It is for x e, yeah. But. You will lose points if it's negative? Like a point. It's not a big deal. OK. So now the next one. Looking at Franck-Condon factors. OK. So the question is, write an expression for the Franck-Condon factor for a transition between these two states. And I wanted you to put in the states, which is why I wrote down the functions. But so you got almost all the points for doing it in Dirac notation. So we have these two states. And the Franck-Condon factor is. So that got almost all the points. People were really worried during the exam about which one's the initial and which one's the final state. It doesn't matter, right? So we're talking about a transition between them. And that amplitude isn't going to depend on which one is initial and which one is final. So, you know, it doesn't really matter. Yes. Because in your lecture notes, the Franck-Condon factor is written with just the vibrational states in the kets; is that OK for all the vibrational states? Oh yeah, that's true. So here's one of these cases where Dirac notation is ambiguous. And so that's why it's best to write in the states. So yeah, what you're saying is these quantum numbers aren't relevant to anything. Yeah, true. But so what I wanted you to actually do is, you know, write out the integral. And so I said nu prime equals zero. So that's H zero, which is just one. And then H two is, right? And do you have to evaluate it? No.
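Written out, the expression being asked for is something like the following; this is a sketch assuming harmonic-oscillator vibrational wavefunctions, with nu' = 0 and nu'' = 2 as suggested by the Hermite polynomials just mentioned.

```latex
S_{0,2} \;=\; \bigl|\langle \nu' = 0 \mid \nu'' = 2 \rangle\bigr|^{2}
        \;=\; \left| \int \psi_{\nu'=0}^{*}(R)\, \psi_{\nu''=2}(R)\, \mathrm{d}R \right|^{2}
```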
I just wanted you to write the expression. So doing it in Dirac notation got you most of the points, but I wanted you to make the connection and write down the functions. OK, so the next question is we want to sketch this transition on an energy level diagram. All right, so we've got some electronic states. And, you know, it doesn't really matter what they look like as long as they vaguely look like a Morse potential. And then we have some vibrational states in there. And you should know that, you know, there's some zero point energy. So that would be nu prime, nu double prime, sorry. And so that's basically it. And then, of course, we needed to label the transition. And all right, the way I drew it, it is difficult to do that. And so there's not very much overlap the way I drew it, but there you go. The next question is going to be, if you drew the arrow going down, is that fine too? Yes, because we didn't specify which one's the initial and which one's the final state. So as long as you're consistent, good enough. I just wanted to see if you can make the connection between, you know, writing down these expressions sort of in quantum terms, plugging in actual functions, you know, not evaluating it but putting it in. And, you know, being able to go from that to looking at a potential diagram. Okay, next one, term symbols. So for those who spent endless hours in office hours writing out all the possible excited states of O2, it was time well spent. Okay, so the O2 molecule is in its ground state. And here, you know, there was some potential for confusion because the book sort of doesn't distinguish between the various electron configurations that you can get with the ground state electron configuration. But of course there is an actual ground state. It's, you know, you have to use Hund's rule to get that. So it's the one where you have the most unpaired electrons. So let's look at this in terms of the molecular orbital diagram. So I'm just drawing the P orbitals. So I've got sigma g. All right, that's my molecular orbital diagram from general chemistry of O2. And I'm ignoring the S orbitals because they're filled and they don't give me anything. And so I write out the electron configuration for this molecule, which gives me this. And again, since it's the ground state, we use Hund's rule and put in the configuration with both of these electrons unpaired and in separate orbitals. Okay, everybody with me so far? Cool. Okay, so then the next question is what's lambda? And it is going to be zero. So we get a sigma term. And then, since we have to decide whether it's g or u, we've got these two electrons and they're both in the pi star g orbital. So that's g times g. So this term is even. And they're in different orbitals. So it's minus. And what's its spin multiplicity? Oops. Nope. Just kidding. It's a triplet state because we have two unpaired electrons. I was getting ahead of myself in thinking of the next one. Okay, so that's the ground state. And, you know, there were a couple of, you know, of course you can put those electrons in different places and write a couple more term symbols. There's a delta term symbol, et cetera. Since I just asked for the lowest energy one, that was essentially a waste of time if you did it on this question. Okay, so the next one is we have one electron promoted from the HOMO to the LUMO. And so, you know, again, general chemistry, highest occupied molecular orbital and lowest unoccupied molecular orbital.
Some people asked about that. I think people just get nervous and forget stuff sometimes. Okay, so you have, these guys are all the same. And then there are a couple choices here. I'm not labeling the orbitals, but obviously they didn't change. And so these things give you two states. You had to get both of them to get full credit; if you only got one of them, we took off a point. So, what do you think? There are various side conversations, but it seems like everybody more or less gets it. Is that true? Okay, good. All right, so, you know, the endless hours that we spent redrawing this picture paid off. Okay, so the next question has to do with NMR. All right, so this question was about writing the matrix elements for I plus, the raising operator for a spin one half. And so its full definition was here. And you want to write this out as, so you know that I plus operated on alpha is zero. This is zero. And then, so I'll write the full thing. There we go. So, all right, so this gives us, there we go. That's more like it. So, one times alpha alpha. And so, the answer is, and if you just wrote the matrix because you had it written down on your cheat sheet, that didn't get full points. You needed to show some understanding of how to do it to get full credit. So, if you wrote out the full matrix, or just wrote it out symbolically, or, you know, put down how the matrix elements work, that all counted, as long as it showed that you knew how to do it. This collection of constants in front in the definition of the raising and lowering operators, I'm just letting you know for purposes of taking the final: it comes out to one in this case, but if it weren't spin one half, it wouldn't, as you remember from doing some of the examples at office hours. Okay, so now let's look at NMR spectra. So, this was one of these things where exhortations to read the directions before doing it were really, really important. So, people did sad, time wasting things. Like, there were people who didn't just draw the spectrum for the protons that were labeled. Some people actually crossed out my labels and relabeled the whole thing and drew all the protons. And it was sad because it must have taken forever and, you know, then people didn't have time for other things. People tried to draw a peak for a proton on the carbon that's labeled with the star, which of course already has four bonds. So, you know, I'm not being mean. I'm just pointing out, like, read the directions. There are things in there that are meant to save you time, you know, like not having to draw the spectrum for all the protons. Okay, so let's look at what we have to do first to get this. Okay, so we have a chemical shift table. And so one of the first things to do here is identify your various functional groups. So, we have A here. So we have our two protons here. And some people made the mistake of saying that this thing was a carbonyl, so they put it in this chemical shift range. And that's not what it is, right? It's, you know, between these two oxygens. So it's an ether. So it would be toward the, you know, higher chemical shift end of the ether range because it's got two of them. But so it should be somewhere around, you know, four or five ppm, somewhere in there. Four and a half, something. Okay, what else have we got? So D here is an aromatic. It has a proton as its neighbor.
C is a methyl group. B has a proton there. And it's got, so B is going to be a complicated multiplet, right? Because it's got two neighbors on one side and three on the other and they're not equivalent. But so, okay, we figured out what chemical shift everything should be. So we can start drawing the spectrum. So you get points for labeling the axis. And let's start on this end. Okay, so D, our aromatic here, is going to be, you know, between seven and nine ppm. So let's, you know, put it down here. And it has one neighbor. So it'll be a doublet. And then our ether is going to be the next one along in the chemical shift scale. And it just has two oxygens next to it, which we can probably assume are, you know, O16. And so it's just going to be a singlet. And then our methyl group here is going to be furthest toward zero. And it's going to be a doublet because it only has one neighbor. And then we're left with what is B. So it's got two sets of neighbors that are inequivalent. And, you know, there's two protons on one side and three on the other. And I didn't tell you which J coupling is larger. So it is either a triplet of quartets or a quartet of triplets. And if you wrote something like that and drew, you know, this, that's fine. Some people really carefully drew the picture out. You know, again, I want to know if you understand it, not your ability to draw these things beautifully, because as you can see, I'm not so great at it. Yes? For the numbers, do we have to, like, be exact, or do we just draw them in that order? For the chemical shift range, you got points if they were in the right order relative to each other and the chemical shift scale was something reasonable. But, you know, thinking that this thing is a carbonyl and putting it at 12 ppm was not reasonable. Yes? It doesn't, because it's exchangeable. It means it's popping on and off all the time in, you know, exchanges with the solvent. Although, you know, people who mixed in the nitrogen proton and the splitting, I think I just took off a point. It was, you know, if you understood the rest of it, that's not a huge deal. Okay, so that is that one. And then we said a new spectrum is collected with the carbon decoupling turned off. And we want to see the signal for the proton labeled A. Okay, so the carbon decoupling is turned off. So that means in our previous spectrum, we didn't see any coupling to C13. Now the decoupling is turned off and we will. C13 is a spin one-half. And proton A, remember, that was the one that was on the ether and it didn't have any neighbors. So it was a singlet before. And so, if it's not decoupled, it's a doublet. And that one was either right or wrong. And so then we had an inversion recovery experiment performed on the C13's in this molecule. So remember, this is the experiment that we do to measure the longitudinal relaxation time. So we flip all the spins, then wait for them to relax and measure how long it takes to get back to equilibrium. And remember what causes relaxation to equilibrium. It's fluctuating magnetic fields local to that nucleus. So something that has direct dipole-dipole couplings with protons, so if it's directly bonded to a proton, that's going to relax faster than something that doesn't have that. Remember, another important source of T1 relaxation is methyl rotation, and also just molecular motion. And so, in all of these respects, you know, here's our molecule again.
The one that's labeled with a star is non-protonated. It's aromatic, so it's probably going to be rigid conformationally. And C is a methyl. So the one that returns to equilibrium more quickly is C, the methyl. And basically, any one of these explanations worked. So if you got any reason why it's going to have a quicker relaxation time, that was good. Yes? So what do you mean by steric hindrance? I don't see how that has anything to do with it. But, you know, win me over. What does that have to do with it? Well, it's going to be a more than the A. It's not the argument about rigidity, it's kind of like a ball of the mind. Hmm, kind of, but it's not that it's sterically hindered. I mean, if you had like a t-butyl group, that would still relax pretty fast because those methyls can still rotate. So, sorry, no. You had to have something about conformational rigidity. Yes? What if you said it was connected to a lot more atoms, and so those atoms affect that carbon, so it takes it longer to come back to equilibrium? Well, but which one is connected to more atoms? I think I said the star one. Well, but it's not, right? They're both connected to four other atoms. It matters what those atoms are. So, you know, what if I just said that one took a longer time? We can talk about it later. Wait, so you're asking if you just got which one took a longer time, right, and not the explanation, did that get partial credit? Yes, it did. Okay, that's it. That's the exam. So, again, the grading is almost done. We'll get it done as fast as we can today and then send it to get scanned. And next time we're going to start talking about statistical mechanics.
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 21. Molecular Structure & Statistical Mechanics -- Second Midterm Examination Review. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:01:33 IR Spectrum of Carbon Monoxide 0:10:36 Electronic Spectroscopy 0:15:17 Franck-Condon Factors 0:19:34 Term Symbols and Electronic Transitions 0:25:05 NMR Spectroscopy
10.5446/18926 (DOI)
Last time we ended with talking about some of the operators that are involved in NMR. And you know here we're still at equilibrium, we still have stuff aligned along the Z axis. But this is the starting point for being able to understand NMR spectroscopy from the physical chemistry perspective. Okay, so I ended with this last time. But let's go back through it and make sure that everything makes sense. So we have an operator that corresponds to the magnetization along Z. And for a spin one-half nucleus that means it's either plus or minus one-half, up or down. These states are called alpha and beta in NMR. And the eigenstates of IZ have eigenvalues that correspond to their spin quantum number. So we can have plus or minus a half here. And if we operate this operator on its eigenstate we just get that quantum number back, times the original state. So okay, not so surprising. This looks similar to things you did last quarter. It's a little bit different system. I think it would make a lot of sense actually to teach NMR first when we're talking about quantum mechanics, because the eigenstates and the operators that we are using are very simple and it's easy to see what they do. So hopefully this maybe even clarifies some things that were challenging last quarter. All right, so what that means, we've written this down in a very general way. What that means is that if we operate IZ on alpha we get one-half alpha. If we operate it on beta we get minus one-half beta. And remember these things make up an orthonormal set. Alpha does not equal minus beta. They are in fact orthogonal to each other. So if we take the integral of alpha, alpha, or beta, beta, we're going to get one. And if we have a matrix element that looks like this, or an overlap integral that looks like this, where we have two of these states, that gives us zero. So we've seen these things before in contexts where it's really easy to visualize what it is. So in this case what's the space that we're integrating over? It's spin space. It's a two-dimensional Hilbert space spanned by these two states. So there isn't really an easy way to visualize it, but fortunately mathematically it's pretty simple. These are the only two things we have going on. We've got plus and minus one-half and we know that alpha and beta are orthogonal to each other. So it's easy to set up the overlap integrals. So now if we want to make matrix representations for our spin operators, so far the only one we really know is IZ. Let's look at how to set up its matrix element. So a matrix element is: we have the two states that we're looking at, with the operator sandwiched in between. And this is a little bit different way of approaching the problem, but it's similar to what we've already done before in the context of talking about group theory. So we made matrix representations for operators in a space or in a context where we can easily visualize what the operator does, and then we made up the matrix that way. So now we have a situation where we don't have an easy spatial representation for it and we just have to do it mathematically, but we're doing the same thing. So we're going to make matrix representations for our operators. So if we have the alpha-alpha matrix element, that means that first we operate IZ on alpha and we get of course one-half alpha, and then we can pull the scalar out of that, and so we just get one-half alpha-alpha, and that gives us a half.
If we do the same thing for the matrix element for alpha-beta for this IZ operator, the first thing we have to do is operate IZ on beta, which gives us minus one-half beta. Again you can pull the constant out in front, and you can see that this matrix element gives you zero. And for all our analyses of NMR operators, you know, this is essentially what we're going to do, and there are cases where we're not looking at things that are eigenstates, and we're going to have to figure out how to write stuff in terms of operations that we can trivially find the eigenvalue for. So all right, let's take another look at IZ in the Hamiltonian. So here's my matrix for IZ. And the way I got that is the factor of one-half out front; you know, I've been sloppy about dropping my h-bars, but they are in there. The factor of one-half comes from the fact that the eigenvalue is plus or minus one-half, so I just pulled it out of the matrix, and then we said that the alpha-alpha matrix element of IZ is one, and then for alpha-beta it's zero, same thing for beta-alpha, and then beta-beta it's minus one, you know, again with that factor of one-half in front of it. Is everybody okay with how I got that? All right, good. So now let's look at the Hamiltonian. So for an NMR experiment the Hamiltonian is minus gamma B naught times IZ, and so that means I'm just going to do the same thing, but I have some more constants in front of it. And we can make the matrix elements the same way. So here again I'm operating the Hamiltonian on the ket first, pulling out whatever constants I have, and then taking the overlap integral of what's left. And so these are the answers that I get. And we know if we look at the matrix representations of two operators, if they're diagonal in the same basis then they commute. So this is really powerful in quantum in general, because if you know that operators commute, that gives you important information about the system. So if stuff commutes with the Hamiltonian then energy is conserved, and you can use this for a lot of things. But okay, so far we're just talking about, you know, when we say this is the Hamiltonian, this is the Zeeman Hamiltonian, right? So we just have the spins aligned with and against the main magnetic field, and we can see the energy difference for that. You know, we know how to operate IZ on these states. But we haven't learned anything about the actual NMR experiment, because for that we need to be able to apply pulses, we need to look at stuff in the XY plane. How do we do that? So you remember from your homework assignment a long time ago, you proved that the angular momentum operators don't commute with each other. In fact they have this cyclic commutation relationship. Here we're calling them I instead of L because we're talking about a spin and not an actual angular momentum from something rotating. But the math is the same. So we know that these things don't commute with each other. So IX and IY, which are the spin operators that we need for looking at our system in the XY plane, those don't commute with IZ, and that means they don't commute with the Hamiltonian, and it's not obvious how to work with them. So we need to define some other operators to look at that. So here's what the matrix representations of IX and IY are, just for reference. And if you don't understand how I got that, that is perfectly understandable. We're going to go through it in a minute. But I want to show you what the answer is before we get there.
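For concreteness, here are those matrices in a small numpy check; this is a consistency check, not part of the lecture's derivation, with h-bar set to one:

```python
import numpy as np

# Spin-1/2 matrices in the Zeeman basis {alpha, beta}, hbar = 1
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

# Cyclic commutation relations: [Ix, Iy] = i*Iz, and cyclic permutations
assert np.allclose(comm(Ix, Iy), 1j * Iz)
assert np.allclose(comm(Iy, Iz), 1j * Ix)
assert np.allclose(comm(Iz, Ix), 1j * Iy)
print("cyclic commutation relations verified")
```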
Okay, so in order to get this result we need to define something called the raising and lowering operators. So what the raising and lowering operators do is they raise and lower the states of the system. Here's how they're defined: I plus is IX plus i times IY, and I minus is IX minus i times IY. And so, we've kind of alluded to before, like, we're getting real and imaginary parts of our signal in the XY plane. You can start to see how this fits together with the formalism that we're using. Okay, here's what I plus and I minus do. So you have this collection of constants out in front, relating to I and m, that is your eigenvalue. And then the result of the raising operator is the state with m raised by one. So if you started with minus one half, it's going to go to plus one half. So if you operate I plus on beta you get alpha. If you operate I plus on alpha, there's no state that's higher than that in the spin system that's defined, so that's going to give you zero. That's not going to be true for spins that have I greater than one half. So if we had a spin 1, we can operate I plus on minus 1 and we'll get the eigenvalue times the state with zero, and then if we operate I plus on zero we do that again and get plus 1. For a spin one half system we only have two choices, up or down. So you can either raise or lower, and if you operate the raising operator on the highest state you get zero, and equivalently for the lowering operator on the lowest state. Okay, so let's write that out explicitly. So I plus operated on alpha gives you zero. I plus on beta gives you alpha. Here I have dropped the constants, so you need an eigenvalue in front of this. I will try to go back and put that in before I post the slides. I minus on alpha gives you, again, your constants times beta, and I minus on beta is zero. So why do these operators exist? Why are we going to use them? They have a bunch of uses in quantum mechanics actually, but in this particular context what we want is: we don't know how to deal with IX and IY, because if we have our magnetization quantized along one of those axes, the eigenstates of IX and IY are something else. They're some sort of linear combination of the Zeeman eigenstates, but we don't know how to measure that and we don't know how to operate on it. If you measure your normal spin states alpha and beta when they're quantized along Y, you'll just get alpha and beta with 50% probability, and that experiment doesn't tell you anything. We need a way to deal with it. So these operators are defined in terms of IX and IY in a way that they give us something that we know how to find the eigenvalue of, that is going to give us a well defined answer, and we're going to go through it right now so you'll see what I mean. Okay, so let's find the matrix elements of I plus and I minus in this basis. So again we're still in the Zeeman basis, we have a spin one half. So if we operate I plus on alpha we get zero. Now if we operate I plus on beta, that gives us our constant times alpha, and then we take the integral of alpha alpha, which gives us one, and again that should be times the eigenvalue. Again, operating I plus on alpha gives us zero, so that one's zero. If we operate I plus on beta, that gives us a constant times alpha, but then the integral of beta alpha is zero. So that's our matrix for I plus. And we can do the same thing for I minus: so we operate I minus on alpha, that gives us beta, but the integral of beta alpha is zero. I minus operated on beta is zero.
I minus operated on alpha gives us a constant times beta, but then beta beta gives us one, and so we get the matrix for I minus. So again, these are just convenient operators that we can work with in the Zeeman basis, and they give us a matrix representation that makes sense. And now we're going to be able to use the definition of these things in terms of IX and IY to enable us to work with the eigenstates of those operators, which, again, why do we want to do that? Because that's the signal that we can actually measure in the experiment. Okay, so here's how these work, and I'm going to let you use this to verify the matrix representations for IX and IY. It's tedious, but it's good practice, so you can just go through and operate these things like I showed you in the previous couple of slides, and once you work it through once, then I think you'll be pretty comfortable dealing with these kinds of operators, and you know what the answers are because I showed you earlier in the lecture. And again, it's tedious, so if you do one of them and you think you totally get it, that's good enough, but if you need extra practice, work through them both. So again, here are the answers that you get. So now we're really taking our knowledge that we learned from looking at group theory and being able to make matrix representations of operators and work with them, and now we can apply that to a quantum mechanical system where the transformations that we're doing are not obvious. You can't really visualize it in Cartesian space because it doesn't live in Cartesian space. It lives in spin space, but because we have these skills of being able to put operators in terms of matrices, we can use all of that same formalism to do stuff where it's not so easy to visualize. So now hopefully the point of being able to do that becomes clear. So we practiced on systems where it's easy to verify the answer because you can visualize it. Now we can do these things where it's a little bit more abstract. Okay, as I have sort of hinted at along the way, we can have spins in a superposition of alpha and beta. We don't have to have everything just in one eigenstate or the other. And so again, this is where the basic textbook picture of NMR goes wrong. You get this idea that everything is either in the alpha state or the beta state. Well, it's not. You can have these superpositions. Okay, so you can have a wave function for your spin, and again, it's a funny wave function. It's not a function in the sense that we're used to looking at. It's a probability mass on either alpha or beta or some combination of the two. So our spin state can have a superposition where we have some amounts of alpha and beta. How much of each is described by these constants. And we can write that down as a vector. So in that notation, here's alpha and here's beta. Why is that useful to be able to do? Because we have all of our operators written out in terms of matrices. And these things are normalized, as I said when we talked about how to do the matrix elements. And again, it's kind of hard to picture these functions and how they're orthogonal to each other. They're in spin space. It is pretty abstract, but they are. So they are orthogonal and they're normalized. All right, so now we get to: what are the eigenstates of IX and IY? And I'm not going to prove how we get those eigenstates, just because there's a limit to how many of these quantum mechanics core dumps we can do, but this is what they are.
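Before using those eigenstates, here is a quick numerical check of the raising and lowering operators constructed above; for spin one half the constants out front come out to one, so they are dropped:

```python
import numpy as np

Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)

# Raising and lowering operators: I+ = Ix + i*Iy, I- = Ix - i*Iy
Ip = Ix + 1j * Iy          # [[0, 1], [0, 0]]
Im = Ix - 1j * Iy          # [[0, 0], [1, 0]]

alpha = np.array([1, 0], dtype=complex)   # spin up
beta  = np.array([0, 1], dtype=complex)   # spin down

print(Ip @ alpha)   # [0, 0] -> raising the highest state gives zero
print(Ip @ beta)    # [1, 0] -> I+ raises beta to alpha
print(Im @ alpha)   # [0, 1] -> I- lowers alpha to beta
print(Im @ beta)    # [0, 0]
```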
So if we have the eigenstate for plus X, so that's your spin quantized along the positive X direction, it's this particular superposition of alpha and beta. So our constant for each of them is one over the square root of two. So now if we're in this state, the X component is sharp and Y and Z are not. So we can measure along X; you know, we're in this plus X eigenstate, so every time we measure along X we're going to get that value. If we measure along Y and Z, it's going to be ill defined. It's not sharp in that case. Okay, so again we're going to apply our matrices that we have been writing down. To operate your operator on the ket, write your spin state in vector notation and then multiply the appropriate matrix by it. So here's what we get for IX operated on the plus X state. And if we simplify the constants, that gives us one half times the plus X state, which makes sense, right? That's what we expect: the original state back, since we said it's an eigenstate, and then we get the value of its spin quantum number, which is plus one half. All right, so that's one of these things. Let's look at the value for minus Y. So similarly we can write it out. And again, I'm not going to show you how we get this as the eigenstate. We're just going to look at the result. And here it is in vector notation. We can also operate IY on it, and we see that we get minus one half times minus Y. And this is what's detected in a typical pulsed NMR experiment. So even if we do some complicated pulse sequences where we flip the spins through all kinds of gymnastics and make them do different things, at the end we have to end up with minus Y as an eigenstate, because this is what we can detect. Okay, so I'm going to show you how to operate some of these things on our spin states and look at what a realistic pulsed NMR experiment does. So here are our rotation operators for RX, RY and RZ. And this is rotating about the X, Y, or Z axes by some angle beta. And when we get into this it should look familiar, because they're the same as the rotation matrices that we've been making for rotating some physical object about an axis. All right, so beta is the angle, so if we have an operator RX pi over 2, that rotates the magnetization 90 degrees about the X axis. And a rotation operator commutes with the angular momentum operator about the same axis. So RX commutes with IX, but it doesn't commute with IY and IZ. And so for a different angular momentum operator we have this kind of a relationship. So there's a lot of math and it's a little abstract, but stick with me, because we're going to get back to how this actually works in the real pulsed NMR experiment. Okay, so we want to apply a pulse with phase X. So we have our spins, they're aligned along the Z axis, they're quantized along Z, they're in the alpha and beta states. And we want to put them into a state that we can measure. So we apply an X pulse, and that's going to take us from whatever our starting state was to a final state. And so we're going to do that by operating our operator on the initial spin state, and so that means we're going to take whatever spin state we started in, write it in vector notation, and then multiply the rotation matrix for the pulse by it. And again, this is called the pulse propagator. And this beta P is the flip angle of the pulse. So pi over 2 in the example that we were talking about. Great, so let's back up and talk about that for a minute.
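First, a sketch of that pulse propagator in action, using a matrix exponential; the claim to check is that a pi-over-2 rotation about X takes the alpha state onto the minus Y eigenstate, up to an overall phase:

```python
import numpy as np
from scipy.linalg import expm

Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)

# Pulse propagator for a rotation by beta about x: Rx(beta) = exp(-i * beta * Ix)
beta = np.pi / 2
Rx = expm(-1j * beta * Ix)

alpha = np.array([1, 0], dtype=complex)
final = Rx @ alpha                          # (alpha - i*beta_state) / sqrt(2)

minus_y = np.array([1, -1j]) / np.sqrt(2)   # the minus-Y eigenstate
overlap = np.vdot(minus_y, final)
print(abs(overlap))    # 1.0 -> same state up to a global phase
```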
This is something that is treated in your book in kind of a hand wavy way, and I want to really show you how it works. It's important to understanding this. All right, so we talk about how the pulsed NMR experiment works, and we say that we have our spins along the Z axis, and then we apply a pulse that's on resonance, so we have the right amount of energy, and it flips the magnetization into the XY plane. Well, how do I get it to actually be exactly perpendicular with the main magnetic field, right? So you can imagine, say, if it's off resonance a little bit, or if the pulse just isn't strong enough, we can tip it part way down, and we won't see a very strong signal then, because we can only measure the projection along the XY plane. If it's too strong, say, and it rotates it farther down, then we're going to see a weaker signal there too, and in fact we can rotate it 180 degrees and just invert the magnetization relative to how it was at the beginning, and then we won't see anything, because it will just be along the negative Z axis. So how do we know that our pulse is actually a 90 degree pulse? The answer is we typically measure this experimentally. I mean, you can calculate it and get close, but we optimize this experimentally. The flip angle depends on the nutation frequency of the RF. So that is, you know, you can imagine the RF as an oscillating magnetic field in a direction that's orthogonal to the main magnetic field, but we can also imagine it as inducing oscillations in the magnetization. So we start along Z; if we apply a pi over 2 pulse, we tip it this way; if we go too far and give it a 180 degree pulse, it goes like this; and you can imagine if we're looking at the signal for that, we get this oscillatory behavior, and so we can express that frequency, you know, in frequency units. And so we're measuring the strength of the magnetic field in a way, but in frequency units, and that tells us about, you know, how much power we have to flip these spins. So this flip angle is that field strength in kilohertz times the time, the length of the pulse. So, you know, we have an angular frequency times a time, and so that gives us an angle. Let's look at what that looks like. So here's a rotation matrix for a pulse of flip angle beta, and again notice how it looks just like the rotation matrix that we used for looking at physical rotations of molecules in a particular coordinate system. So these things that we have learned are definitely applicable to this system that's a bit more abstract. So again, let's see what this looks like. So we have our magnetization vector initially along Z. We turn on the pulse. We have calculated the flip angle and the nutation frequency to be exactly right so that it's a 90 degree pulse. And that's going to give us our magnetization in the minus Y eigenstate. It also picks up a phase factor, which is this extra little e to the i pi over 4, which we shouldn't worry about right now. But this tells us about, you know, how we can actually experimentally make these spins flip. So can you guys please drop the side conversations? It's very distracting. It's distracting to me and I think it's distracting to other people too. Okay, so let's look at experimentally how you do this. So this is something that my lab does. We build NMR probes. We build the RF circuits that produce these radio frequency pulses that flip the spins, and it turns out that there are lots of experiments that we can do where you have to develop special hardware to do them.
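Before getting to the hardware, here are numbers on the flip-angle relation: beta equals the nutation frequency (as an angular frequency) times the pulse length. The 132 kHz proton-channel value is taken from the nutation curves discussed below.

```python
import math

nu1 = 132e3                # RF field strength (nutation frequency) in Hz, proton channel
omega1 = 2 * math.pi * nu1

# Flip angle beta = omega1 * t_p  =>  pulse length for a 90-degree (pi/2) pulse
t90 = (math.pi / 2) / omega1
print(f"t_90 = {t90 * 1e6:.2f} microseconds")   # ~1.9 us
```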
Grad students and actually a few undergrads in my lab have worked on this. So, you know, that means that we work in the machine shop, we build electronics. It's pretty interesting stuff. So here's what the probe looks like. So it's really long because it's inside the magnet. So when you see pictures of NMR magnets, or you go to the NMR lab to run your experiment, if you're just a casual user and you don't build this stuff, you don't see what's actually doing most of the interesting stuff. So inside the magnet there's a little coil that is the thing that delivers the pulses to the sample, and it also listens to the signal that comes back, and that's just represented as the inductor here. But the device is long because that coil has to be located in the very center of the magnetic field. So it's inside the magnet. So the actual business end of it is relatively simple. So we have this parallel resonance circuit. So that's the inductor in parallel with the tune capacitor, that's called CT. By adjusting that variable capacitor we can change the resonant frequency of the circuit. And then you notice there's this other little capacitor in series with it. That's the match capacitor. We can adjust that to zero out the imaginary part of the incoming RF. So that matches the signal to 50 ohms. We have to have the pulse that's coming in impedance matched to the load. So this is how we experimentally deliver the RF pulses. And so what I wanted to show you is, we've been talking about nutation frequencies and how we can measure that. These are some that are experimentally measured for some real probes. So one thing to notice is that we have three of these. So we're looking at protons, carbon, and nitrogen. So when we're doing a multi-dimensional NMR experiment, we talk about, okay, we can look at proton, carbon, nitrogen, phosphorus. For every nucleus that we're looking at, we have to have a separate channel of the probe. You need a separate RF circuit to be able to interact with that. And particularly when we get into talking about, okay, we're going to detect proton and decouple carbon, you have to have two channels to be able to do that. And that means that you need two of these RF circuits, and they're all coupled. So why am I showing you this? Just to give you a feel for it. We can talk about all this stuff in theory, and it's neat, it works out really nicely, but that just doesn't give you a feel for what you actually do. And so you're getting that flavor for it in the case of NMR, because that's what my lab does. If I did something else, then you might be hearing more about IR spectroscopy or something like that. But again, so this is how you experimentally measure the flip angle that a pulse is going to have on your signal. And you can see we have the RF field strength at some constant value. So here, and this is what B1 refers to: for proton we had 132 kilohertz; carbon, it's 71.4; nitrogen, 86.2. That frequency refers to the frequency at which the magnetization is going around and around in these sine waves. And in that case, it's a measure of the amplitude of the field that's being applied. And it's one of these things where the units are very weird. It's strange to think of a magnetic field in units of frequency. But we do this in NMR all the time. So we're talking about the main magnetic field in frequency units.
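Circling back to the probe circuit for a second: back-of-the-envelope numbers for the parallel LC resonance described above, with a hypothetical coil inductance. Real probe circuits have more going on, so this is only a sketch.

```python
import math

# Parallel LC tank: resonant frequency f0 = 1 / (2*pi*sqrt(L*C))
L = 50e-9          # sample coil inductance in henries (hypothetical value)
f0 = 500e6         # target proton frequency for an 11.7 T magnet, Hz

# Solve for the tune capacitance needed to resonate at f0
C_tune = 1.0 / ((2 * math.pi * f0) ** 2 * L)
print(f"C_tune = {C_tune * 1e12:.2f} pF")   # ~2 pF
```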
Like, we usually say we have a 500 megahertz magnet, you know, rather than an 11.7 tesla magnet, which would be the appropriate SI unit for magnetic field. The reason we do that is we're saying that the precession frequency for protons in that magnet is 500 megahertz. And that's something that's convenient to talk about in terms of NMR. Same thing here. We're talking about the amplitude of the RF field that we're applying, not in units of tesla or something else, but in terms of how much we can actually influence the spin. And so, again, that nutation frequency times the time that the pulse is applied gives you the flip angle. So if you look at these plots, in the case of the proton, each one of these steps is 0.5 microseconds. So if we apply the pulse for 0.5 microseconds, you see the first point doesn't tip the magnetization very much at all. We get a weak signal. The second one, after, you know, one microsecond, tips it a little bit more. And then we go up to 90 degrees, and, you know, and so on as the magnetization goes around and around. I want to point out that if, experimentally, stuff were perfect, this should look like a perfect sine wave, and the magnetization should go around and around forever, and there should be no limit. That's not how things actually work. So it turns out that the coil that you're using to apply the field is not perfect. And if you look at, you know, especially the proton channel here, you can see, if you look at the third or the fourth maximum, the overall amplitude is a little bit lower. This thing is starting to decay. That's because stuff starts to lose coherence. As you apply the field for longer and longer, it's not perfect. One reason for that is that your coil is not, you know, an infinitely long solenoid where the magnetic field is the same in all parts of it. It's higher in the middle and it falls off toward the ends. And we actually can do things to try to make it better when we're engineering these things. So for a solenoid, for example, if we just have a coil that's literally wound on a cylindrical form, which sometimes we do use, you can make it stretched in the center and squished on the edges to try to even out the magnetic field, and that's something that we do. So here's a kind of a funky looking coil that was built in my lab. And you can see that one of the properties it has is that it has a really nice magnetic field right in the center. This plot is the magnetic field as a function of distance from the inside of the coil. So right in the center, it has a really nice magnetic field, and it falls off very quickly at the edges. And that gives us these very nice looking nutation curves. Now, they look so nice because in that experiment the sample is restricted to only be in the region where the coil looks perfect. Okay, so we talked about how to deal with our spin operators. You got some homework as far as applying the raising and lowering operators, you know, which is just to give you the experience of working with them, how do you apply these operators to stuff and make this work. We related that to pulsed NMR and how we actually see a signal. Now I want to talk a little bit about relaxation and relate that to actual experimental factors. Okay, so where we're going with this is, we've talked about, you know, when you have your perfect 90 degree pulse, you pulse the magnetization into the XY plane, and it's going to relax back to equilibrium and end up back along Z.
So so far, all I've told you about how that process works is that it's not emission of an RF photon. Our system does not spit out a packet of RF and come back to equilibrium. It does something else. So what? Let's talk about that. Okay, so if I put the sample in the magnet, so, you know, I go to the liquids machine and put my little NMR tube in the top of the magnet and let it sit there, how long does it take for my spins to align along Z? What do you think? Do you want to take a guess? Twenty minutes. Twenty minutes? If you're looking at like silicon 29, or, you know, maybe a carbon that's not close to anything at all, like if it's a carbon in a perfect diamond, that might be a good guess. If it's a liquid, it's a couple of seconds. But here's what it's not. It's not nanoseconds. So when you put your sample in, you know, one guess that people often make is, well, I have a 500 megahertz magnet, so take one over 500 megahertz and that's how long it takes the spins to align. It's not. It's independent of the precession frequency. It's a different effect. So what's happening is you have all your little spins in there, and they get bumped into by other molecules, and so when the molecules move, there are other little oscillating fields, and they get bumped, and they eventually end up aligning with the field, and that process takes a little while, but it depends on the spin. It depends on the local chemical environment, you know, what's actually causing the relaxation, and, you know, it's on the order of half a second or a few seconds for some typical samples. So you hear about this when you're talking about pulsed NMR in kind of a practical context, because you know that if you give a pulse and you wait for the magnetization to relax back, if you don't wait long enough, the second scan, you're not going to get very much signal. So if I only wait for it to come halfway back and then pulse again, I'm going to get a smaller signal the second time. Of course, that's not what we want to do. We want to signal average over a long time and add up many scans. So we have to wait long enough for that magnetization to come back. So this relaxation that we're talking about is called longitudinal relaxation, and that's along the Z direction, and so we're decoupling the interaction in the XY plane from this relaxation at this point in the discussion, and that's fair to do. So this is what relates to, you know, the spins losing energy to the surroundings and coming back to equilibrium. So here's what that looks like. Here's the functional form of that. So we have our M Z as a function of time minus M 0, where M 0 is the equilibrium value, and that difference decays as e to the minus t over this constant T1. So T1 is the longitudinal relaxation time constant, this relaxation along Z. And notice that this is just an exponential decay. There's no oscillatory component here. We're just talking about the relaxation in the Z direction. Okay, so let's talk about what causes it. So in organic molecules or proteins or things like that, a lot of what causes it is methyl rotation. So in the context of other kinds of molecular motions, we've said a bunch of times that, you know, methyl groups are freely spinning all the time. Methyl groups have a carbon, which could be C13, usually it's not, but it has three protons that have a nice strong local magnetic field, and they're spinning around. This is something that can cause relaxation. Another thing that can cause it is segmental motion.
So if we have a chain, you know, again, stuff rotates freely about single bonds. So in this particular liquid crystal, and this is a spectrum that I took, these chains rotate around and that causes relaxation. Another thing that can cause relaxation is chemical shift anisotropy. So if we have an anisotropic chemical shift, we have an electron distribution that's shaped, you know, like say it's shaped like a football; as that moves around, there's a locally changing little magnetic field that's going to induce relaxation in nearby things. There's also dipole-dipole relaxation. So that's, you know, again, we've talked about dipolar coupling: the little spins act like bar magnets, and they interact with each other with a one over r cubed dependence. They can also relax each other. So that's why I said that if we wanted to come up with an example of something that has a long relaxation time, you know, on the order of 20 minutes or something, that would be a nucleus that's very isolated. It doesn't have any of these mechanisms for relaxation. So something that would take a really long time to relax would be like a C13 carbon at natural abundance in a diamond. So C13 is normally 1% natural abundance. So that one little C13 in a sea of C12 is going to take a really, really long time to relax, because it has none of these mechanisms. It doesn't have any magnetically active nuclei near it to interact with. It's going to have to give energy to its environment through just lattice vibrations and things like that, and it'll take a lot longer. This can be a huge pain for some samples that people are interested in looking at. So for instance, if you have an organic molecule that has a bunch of protons and carbons, if it has a lot of quaternary carbons that aren't attached to any protons, they can take a really long time to relax. And you waste all your time experimentally, because, you know, remember you have to wait for the magnetization to relax all the way back along Z, and you'll have a few nuclei in your sample that are really stubborn about this, and it takes a really long time. Okay, so here's a pulse sequence for measuring that. So let's talk about pulse sequences. Have you seen any of this in organic chemistry? So raise your hand if this looks familiar. Okay. How many musicians do we have? People play music? Quite a few? Yeah. So, pulse sequences are like musical scores. So this particular one is only one-dimensional. We're talking about protons or C13 or N15, you know, one kind of nucleus at a time. When we look at pulse sequences that are more realistic, we're going to see a whole bunch of lines where, you know, the protons are doing one thing and the carbons are doing something else and the N15's are doing something else, and they're all synchronized, and it's a lot like a musical score. And so, you know, like a musical score, there's specialized notation. We're not going to get into it too much. I mean, it's fun. But basically what we need to know about this is that if we have a pi pulse, that's a 180 degree pulse, that's written as a pulse of longer duration, and a lot of times the square is open. A pi over 2 pulse is written as shorter duration, and a lot of times the square is black. And then the free induction decay is clear. And this single-headed arrow tau means that we're going to do this experiment, but we're not just going to do it once. We're going to repeat it over and over again, and we're going to make tau longer each time.
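A minimal simulated version of that tau-arrayed measurement, assuming the standard inversion-recovery form M_z(tau) = M_0 (1 - 2 exp(-tau/T1)) and a made-up T1:

```python
import numpy as np

T1 = 1.5                          # assumed longitudinal relaxation time, seconds
M0 = 1.0                          # equilibrium magnetization (arbitrary units)
tau = np.linspace(0, 6, 13)       # the arrayed delays, lengthened each repetition

# Inversion recovery: start inverted at -M0, relax back toward +M0
Mz = M0 * (1 - 2 * np.exp(-tau / T1))

for t, m in zip(tau, Mz):
    print(f"tau = {t:4.1f} s   signal = {m:+.2f}")
# The signal crosses zero at tau = T1 * ln(2), a common quick estimate of T1.
```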
And that's called an arrayed experiment. And here's what the results of that experiment look like for an organic molecule. So this is called the inversion recovery experiment, and I'll just show you, in the sort of finger pointing explanation, why it's called that. So the pi pulse at the beginning inverts it. And now I wait some time tau, and it starts to relax back to the equilibrium position. But then before it gets there, I pulse it again and detect it. So at the very beginning, it's going to be almost all the way along the minus z-axis. So I'll get a strong signal when I pulse it back along the x direction. But then the next time I make tau a little bit longer, so it has a little bit more time to recover before I measure it. And then as we get to the point where it crosses through zero, then I'm not going to see any signal when I pulse it. And then it's going to start coming back. So we will see this exponential recovery where we start with a negative signal, and then it slowly comes back, and then levels off, because it's never going to get higher than the original equilibrium value. So here's what that looks like for this organic molecule. So we have, you know, for these different carbon atoms that are labeled here, we see in spectrum number one here, that's the shortest waiting time, everything is down along the minus z-axis. So everything is inverted. And then as we wait longer and longer times, some of these things start to recover, and we see, as we would expect from what I just said, that the CH3 and the CH2 recover first. These are things that have a lot of motion. They're protonated. They've got dipolar interactions with the protons. And then the things that are in the phenyl ring, which have fewer mechanisms for relaxation, take a little bit longer to relax. Let's quit there for now. What I want you to understand about this is, you know, okay, what causes the longitudinal relaxation, conceptually, what are the molecular factors causing it? And also I would like you to understand conceptually the experiment that we do to measure this, the inversion recovery experiment. And you should practice operating your matrix operators on the spin states. That is it for today. Have a great weekend.
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 18. Molecular Structure & Statistical Mechanics -- Eigenstates & Eigenvalues. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:01:19 Matrix Representations 0:05:21 Zeeman Basis 0:11:31 Raising and Lowering Operators 0:17:01 Superpositions 0:21:16 Spin Operators and Eigenstates 0:23:46 Pulsed NMR 0:29:05 NMR Probes 0:31:09 Nutation Curves (Solenoid) 0:36:41 Spin-Lattice Relaxation (T1) 0:44:10 Inversion Recovery (T1) 0:47:11 Relaxation Along the Z-Axis
10.5446/18925 (DOI)
Good morning everybody. Is my mic working? Can people hear me? Okay, good. A couple of announcements. First, I unfortunately have to cancel office hours today. I have a dentist appointment and I don't think I can promise to get back in time. So sorry about that. I just thought it might work, but I'm not sure who knows how long it takes. Also, I'm kind of sick and I think nobody wants this cold. So we'll just have office hours tomorrow. And as always, if you have more questions, please post stuff on the Facebook page. Also, I just want to comment about the P-Chem seminars. So lots of people are going. That's really great. More people are asking questions and participating and that's really cool. I just want to mention most people are doing what they're supposed to, but I have noticed that it is a large group of people in there and there are some inappropriate side conversations and stuff going on during the question and answer session. Please don't do this. It's really great to have everybody there and it's good to ask questions and participate in the discussion, but when everybody is talking among themselves during the discussion or leaving in an obtrusive manner, it's not great. And when the speaker is making jokes about people leaving halfway through, that's really not so cool. These people are visitors to UCI and we want to give them a great impression. And again, most people are doing exactly what they're supposed to, just make sure that you are one of those people. Does anybody have any questions before we get started talking about NMR? A couple people came and tried to ask me stuff while I was setting up and I didn't have time to answer right then. So I know there are questions. Anybody want to ask them? Yes? Can we turn in the worksheet for yesterday's seminar? When can you turn them in? You can turn them in whenever. I mean, you can stick them under my office door. You can give them to your TA. And you want, ah, that's another thing that I wanted to mention. I have most of the extra credit seminar things graded. I know that there's a stack of the Heather Allen ones that are in my office somewhere that I need to look for. So if you still haven't gotten your score for that, sorry about that. I will look for it. Also I'm a little behind on the regrades. I planned to catch up on this stuff this weekend and I was kind of sick. So sorry about that. I will get it done pretty soon. Any other questions? OK. Let's talk about NMR. So last time we left off talking about the Zeeman effect, which is the condition where anything that has a nonzero spin, so electrons and some atomic nuclei, has the degeneracy of its spin states broken in a magnetic field. So if we have our little spins and there's no applied magnetic field, they're just all in random orientations and they all have the same energy. And if we put the sample in a magnetic field, then now we have a quantization axis and the degeneracy is broken. And I wanted to put this up here. This is from an organic chemistry book. And this is the explanation that you see pretty often where all your little spins are in random orientations and then you put it in the magnetic field and they all magically either go into the alpha or beta state where they're up or down. That's not actually what happens. They don't all have to pick one or the other of these states. In fact a lot of them are in different superposition states. There's still a random distribution of orientations of the spins. But what this means is that's your quantization axis.
So if we measure values of the spin, we're going to be able to measure states that are either in the alpha or beta state and the rest of them are not going to be well defined. The thing that is correct about this is that alpha and beta have different energies as opposed to the condition where there's no magnetic field and they're all degenerate. And also this change in energy for the different spin states is really small. And later on toward the end of the NMR discussion when we get into talking about Boltzmann distributions and start moving into stat mech, we'll see exactly how small this energy difference is. And it's really amazing that this works at all. NMR depends on these very small population differences. And when we're looking at a typical NMR sample, most of the nuclei are not giving us any signal. So there are almost equal numbers of spins in the alpha and beta states and most of them are just canceling each other out. And it is amazing that it works at all. So okay, since the energy difference between the alpha and beta state is really small, we want to maximize that as much as we can in order to get more signal. And that is one of the reasons why people like to have bigger and bigger NMR magnets. There's another reason also that has to do with chemical shift dispersion and being able to separate out nuclei that are in chemically different environments. So having a higher field magnet gives you both greater sensitivity and greater resolution. And here's a plot of what that looks like. So the energy differences between the spin states for a particular nucleus or for an electron are determined by the strength of B naught, the main magnetic field. And so here are just some pictures of instruments that we have at UCI: we have 300 megahertz instruments. We also have a 600 and there is an 800 megahertz magnet, which is the large one here. Okay, so just to show you some of the high end instruments that people use, the thing that looks like it lands on Mars is the Oxford 900 megahertz magnet. It just has a bunch of fancy packaging, the platform around the top and everything is just for show. But it is important to make really big magnets to get higher resolution of the chemical shifts. And then the lower picture is a high field MRI scanner for medical diagnostics. And the same thing applies there. So in imaging, instead of looking at local differences in the magnetic field from the local chemical environment of the nuclei, what we're looking at is essentially all water and magnetic field gradients are applied in order to make apparent chemical shift differences that are spatially encoded. And it's desirable to have bigger and bigger magnets for that too because the larger the magnetic field, the higher your signal is and if we apply larger gradients we can get finer and finer resolution. But the problem with that is that the magnetic fields and in particular the RF that we have to use start to actually interact with your brain at these levels. So we have to be careful about applying too much power and heating tissue up. And also if you apply very strong magnetic field gradients, it can actually induce electrical signals in your brain and you see flashes of light. And it's kind of interesting but not what most people want to experience when they go in for an MRI. So this picture of a brain is actually my brain. I had it scanned at UC Berkeley while I was a postdoc because one of my friends does this kind of research.
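(A quick aside, not from the lecture: to put a number on how small those population differences are, here's a back-of-the-envelope sketch using the Boltzmann distribution. The temperature and the list of magnet sizes are just illustrative choices.)

```python
import numpy as np

h  = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K
T  = 298.0            # temperature, K

# Proton Larmor frequencies for a few magnet sizes; this is how NMR
# magnets are usually named, so a "500" is an 11.7 tesla magnet.
for nu0 in [300e6, 500e6, 900e6]:
    dE = h * nu0                        # Zeeman splitting for protons, J
    ratio = np.exp(-dE / (kB * T))      # n(beta)/n(alpha), Boltzmann
    excess = (1 - ratio) / (1 + ratio)  # fractional population excess
    print(f"{nu0/1e6:5.0f} MHz: n_beta/n_alpha = {ratio:.8f}, excess ~ {excess:.1e}")
```

Even at 900 megahertz the net polarization is only a few spins in a hundred thousand, which is the sensitivity argument for ever bigger magnets.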
And so I got to experience fun things like turning the gradients up really high and seeing flashes of light in there. So it's neat and in these research instruments people use really high fields but for the ones that are actually in the clinic you have to be a little bit careful because random sick people are not interested in experiencing these things. So back to talking about the Zeeman effect. Let's put this in terms of quantum mechanical things that we've seen before. So we mentioned that spin up is called alpha and spin down is called beta. Alpha does not equal minus beta. We have gotten into this when we're talking about the, you know, doing term symbols and looking at electronic states. The individual electrons are interchangeable and that goes for nuclei as well. But, you know, if you have an alpha and a beta they don't cancel each other out except in the sense that if you have equal numbers of them you're not going to see an NMR signal. All right, so this energy difference, the difference between beta and alpha, again, is directly proportional to B naught. So this gamma here is the gyromagnetic ratio which is, you know, we can, it has to do with the structure of the nucleus. You can take it as pretty much a fundamental physical constant for a particular kind of nucleus and that's something that we look up. So for a particular type of nucleus whether it's a proton or a C13 or whatever we have this gyromagnetic ratio. We have a factor of H bar and B naught. So if we want to increase our signal at this point really all we can do is increase the strength of the magnetic field. It turns out there are other things that we can do to increase the polarization difference. We can use what's called hyperpolarization and if we have time maybe I'll talk a little bit more about that later. But in terms of traditional NMR and EPR techniques for increasing the sensitivity all you have is increasing the number of spins or making the magnetic field bigger. And again it's nuclear magnetic resonance. So the resonance condition is that the energy of the RF that you put in has to be equal to the energy difference between these two states or you're not going to see a signal. Okay so here's our nuclear spin Hamiltonian and just like we talked about in electronic spectroscopy we're going to treat the nuclei and the electrons separately. And so here we're worried about the nuclear spin Hamiltonian and so we're going to ignore the electrons except as a time averaged local magnetic field that the nuclei see. And this is why NMR is useful to chemists. We have these local magnetic fields that depend on the distribution of electrons around the nucleus which of course are primarily due to electrons in the chemical bonds and that's what enables us to find out things about structures. So if you go back in the early, early literature 50 years ago physicists discovered NMR and they discovered the effect and they were really excited about it. And the original paper where this is described they're kind of speculating about what it's useful for and they said well maybe it would be useful as a really accurate means of measuring the strength of magnetic fields except that there's this crappy thing called the chemical shift where a proton doesn't just behave like a proton it's different depending on the chemical environment that it's in so that makes it less useful. 
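(Another aside along the same lines, not from the lecture: the resonance condition nu naught equals gamma B naught over two pi, evaluated for a few common nuclei. The gamma over two pi values are standard tabulated numbers, and the field corresponds to a so-called 500 megahertz magnet.)

```python
# Resonance frequencies nu0 = gamma * B0 / (2 pi) for a few common nuclei.
# The gamma/2pi values are standard tabulated numbers, in MHz per tesla.
gamma_over_2pi = {
    "1H":  42.577,
    "13C": 10.708,
    "15N": -4.316,   # negative gyromagnetic ratio
}

B0 = 11.74   # tesla, the field of a "500 MHz" magnet
for nucleus, g in gamma_over_2pi.items():
    print(f"{nucleus:>3s}: {abs(g) * B0:6.1f} MHz")
```

So the same magnet is a 500 for protons but only about 126 megahertz for C13, which is why the RF has to be tuned to the nucleus you want to see.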
And of course that's the whole reason that this is useful as an analytical technique because we do have differences in the local chemical environment that have to do with the molecular structure. So the lesson there is the application that you think might be most useful for something isn't necessarily what it will end up being used for. You know if you're lucky you publish something and people in different fields pick it up and find other stuff to do with it. And also it's good to do basic research. You never know what applications things will have. Okay so when we're talking about our spin Hamiltonian there are all kinds of terms that go on in here and here's a graphical representation of what the different interactions are in NMR. Okay so notice we have different plots for solids and liquids. So in organic chemistry I'm pretty sure you've mostly just seen solution state NMR. That's most of what we're going to talk about in P-Chem too, but we'll talk about solids a little bit because they have a lot of interesting effects that are not present in solution and also that's what I do so you get to hear about solid state NMR. Alright so in this Hamiltonian for your nuclear spins we have all these different terms and here the size of the circles is proportional to the relative sizes of the interaction so it's just to give you an idea. So the first term is the Zeeman interaction so that has to do with what kind of nucleus is it and how big is the magnetic field, and under normal experimental conditions that's almost always going to dominate. So then the next term here is the RF, that's the radio frequency pulse. So again remember we put our spins in the big magnetic field and they line up but that's boring, that doesn't give us a signal. We have to change the quantization axis and get them to release some energy that we can measure and that's done with the radio frequency field and I'm going to tell you some details about how we do that, and you know equally for solids and liquids this is the next most important term in the Hamiltonian. Did you talk about perturbation theory last quarter? So who knows what I'm talking about when I say perturbation theory? Sorta? Okay so you can think about the NMR Hamiltonian here as, you know, your unperturbed term is the Zeeman interaction and then the first order perturbation to that is the RF and then we have all this other stuff going on. Okay so the next thing involved is the dipolar interaction and so this is a spatial interaction between the nuclear spins so we can treat them like little magnets and these little dipoles interact with each other through space and that interaction goes as one over r cubed and it also has an orientation dependence and you can imagine that this is really useful in solving molecular structures, you know, we have an orientation dependence and we have a distance dependence for these little dipoles and in solid state NMR this is in fact where we get a lot of our structural information but it also makes the spectra more complicated. Notice that it's not there in liquids that's because in solution the molecules are tumbling isotropically they're moving around really fast on the time scale of the experiment and so anything that has an orientation dependence is going to get averaged out. Okay so the next thing down here is the chemical shift so for solids this is quite a bit smaller than the dipolar interaction but for liquids this is the next largest term in the Hamiltonian.
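(Quick numerical aside, not from the lecture: the through-space dipolar coupling just mentioned, evaluated for two protons at a hypothetical separation of 2 angstroms. The secular form used here is the standard one, with the 3 cos squared theta minus 1 orientation factor.)

```python
import numpy as np

mu0_over_4pi = 1.0e-7        # T^2 m^3 / J
hbar = 1.054571817e-34       # J s
gamma_H = 2.6752e8           # proton gyromagnetic ratio, rad s^-1 T^-1

def dipolar_coupling_hz(r_m, theta):
    """Secular dipolar coupling between two protons a distance r apart,
    with theta the angle between the internuclear vector and B0."""
    d = mu0_over_4pi * gamma_H**2 * hbar / r_m**3     # rad/s, note the 1/r^3
    return (d / (2 * np.pi)) * 0.5 * (3 * np.cos(theta)**2 - 1)

r = 2.0e-10   # 2 angstroms
print(dipolar_coupling_hz(r, 0.0))               # ~15 kHz along B0
print(dipolar_coupling_hz(r, np.deg2rad(54.7)))  # ~0 at the magic angle
```

At 2 angstroms that's on the order of 15 kilohertz, and it vanishes at the magic angle, which is also why spinning solid samples about that angle averages it away.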
The chemical shift is again this interaction between the nuclear spin and the local magnetic field that's there as a result of interactions with the electrons and we're treating the electrons as just this smeared out time averaged magnetic field that the nuclei see. Notice that the chemical shift is larger for solids than it is for liquids. That's because there's an isotropic part and an anisotropic part and again in liquids everything is moving around really fast and it gets averaged out, and in solids that isn't true. Okay so the next item down is the quadrupole interaction, which can be quite large in solids; it only exists for nuclei that have spin greater than a half and in liquids this is also averaged out. So nuclei with spin greater than a half include deuterium, nitrogen 14, lots of metals, things like sodium; we'll see some examples of that later on but again we don't have to worry about it in liquids. And then the last small interaction here is the J coupling and that's the scalar coupling, it's this interaction between the nuclei that is transmitted through the bonds and as the name implies it's a scalar so it stays unchanged regardless of the motions of the molecule and so it is there in both solids and liquids and it's something that we can use to tell us something about the structures of the molecules as you've most likely seen in organic chemistry. Okay so that's kind of an overview of what the terms in the Hamiltonian look like and we'll see this picture again as we go through the different interactions. Let's go through and talk about how this experiment works. Okay so if we have our pulsed NMR experiment this is a little bit different from other types of spectroscopy. So again if you open up your organic chemistry book depending on which one it is it might have an explanation of NMR that's not quite right so a lot of them I was horrified to discover recently have this picture where you put in the radio frequency pulse and your spin state goes from alpha to beta and then a photon gets emitted and you detect it. That's not actually how it works I mean that's analogous to other types of spectroscopy but that's not really what's going on in NMR. So remember we talked about what happens when you have some excitation you put energy into a system and there are all these different mechanisms by which it can relax back some of which we can measure and some of which we can't. In NMR the relevant relaxation mechanisms are all kinds of other things other than your system spitting out an RF photon, and stimulated emission is not really an important effect here. So instead what we see is we deliver a 90 degree pulse and put our quantization axis into the XY plane and then we see this free induction decay. Remember we have the magnetization relaxing back to the equilibrium position after we release the pulse and it has this dependence because we're detecting in the XY plane so we get a decaying exponential multiplied by a cosine function. And again remember our Fourier transforms so the FID has this kind of a functional form and then the Fourier transform of that is a Lorentzian which we approximate with this first term and there is an inverse relationship between the length of the FID in the time domain and the width of the Lorentzian in the frequency domain.
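(Here's a minimal simulation of exactly that, not from the lecture: a decaying cosine FID and its Fourier transform. The T2, offset frequency, and dwell time are invented for illustration.)

```python
import numpy as np

T2 = 0.5        # hypothetical transverse relaxation time, s
nu = 50.0       # resonance offset, Hz
dt = 1.0e-3     # dwell time, s
t = np.arange(0.0, 4.0, dt)

# The FID: a cosine at the offset frequency damped by T2.
fid = np.exp(-t / T2) * np.cos(2 * np.pi * nu * t)

spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), dt))

# The line is approximately a Lorentzian at +/-50 Hz with full width at
# half maximum 1/(pi*T2): a long FID gives a narrow line.
print("peak at", freqs[np.argmax(np.abs(spectrum))], "Hz")
print("expected linewidth:", 1.0 / (np.pi * T2), "Hz")
```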
So if we have a signal that takes a long time to die away that's going to give us nice narrow lines, if it dies away quickly then we have broad peaks, and we're going to talk about the things that might dictate that a little bit later on, and as a result of this we get a spectrum. So again it's a completely different mechanism from the CW case where we sweep the frequency and see how the sample responds at different energy levels. We're putting in a pulse exciting the whole thing at the same time and then taking the Fourier transform. Okay so the information that you get is on the basic level largely independent of whether you're doing CW or pulsed NMR except that in the pulsed NMR case it works a lot better, but the information that we're getting in the chemical sense is essentially the same. So here we're just looking at protons but this holds true for any kind of nucleus that has a non-zero spin that we can see in the NMR signal. So protons in a particular kind of chemical environment are going to have a characteristic chemical shift and so this tells us a lot about what kinds of functional groups are present in the molecule and what kinds of structure we have. And so this table is something that I'm sure you've seen before in organic chemistry books and these are useful things to know. It's good to know where different types of protons show up roughly in terms of chemical shift. When I say it's good to know that means there are likely to be exam questions where you have to sketch the spectrum of some molecule and I will give you some kind of basic rudimentary chemical shift table but it's good to have a general idea about how this stuff works. So you know in organic chemistry you get really complicated spectra and you have to figure out the structure of molecules. For P-Chem I'm likely to have you do it the other way. I'll give you a molecule and you have to predict what the NMR spectrum looks like because that's really what it's about. We want to understand how the spectroscopy works. So here's a spectrum of a molecule and you can see the methyl groups show up between 1 and 2 ppm as we expect and then the methyl group that's attached to the oxygen has an increased chemical shift. So does everybody remember what the chemical shift is from organic chemistry? I realize I'm not going over this but I think it's review for everyone. Is that true? Yeah okay. So I will just say it has to be defined relative to some reference. That's usually TMS, tetramethylsilane, so it's just a silicon atom with methyl groups all around it. That is defined as being 0 ppm. So if you go measure an NMR spectrum without having it referenced, if you have an old instrument like the one in my lab you will get this axis in kilohertz basically so you just have a frequency scale, and the ppm scale is parts per million so it's kind of like a percent but it's out of a million, and that is relative to the main magnetic field and to what the protons in TMS do generally. There are other references that you can use for different things but that's standard for a lot of organic molecules. Okay so that's the chemical shift from kind of a practical perspective, you know how do we want to use this to see what structure molecules have. Let's look at it a little bit more as far as where it comes from.
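(One more practical aside before moving on, not from the lecture: the ppm arithmetic in code form. The numbers are hypothetical.)

```python
def to_ppm(nu_hz, nu_ref_hz, spectrometer_hz):
    """Chemical shift in ppm relative to a reference (e.g. TMS at 0 ppm)."""
    return (nu_hz - nu_ref_hz) / spectrometer_hz * 1.0e6

# A proton 1500 Hz downfield of TMS on a 500 MHz instrument:
print(to_ppm(1500.0, 0.0, 500.0e6))   # 3.0 ppm

# The same 3.0 ppm shift corresponds to twice as many hertz on a 1 GHz magnet:
print(3.0e-6 * 1.0e9, "Hz")           # 3000 Hz
```

The shift in ppm is field independent, but the same separation in hertz grows with the magnet, which is the resolution argument from earlier.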
So what we're looking at here is the electron cloud around a particular spin and the electrons are making a local magnetic field depending on their distribution and that causes the nuclei to see this local effect that either adds to or subtracts from the main magnetic field. So here's a molecular model of glycine just so you can see this and I'm showing you a C13 spectrum just to remind everybody that we don't have to look at protons all the time. There are lots of other nuclei that give interesting NMR spectra and if we look at the molecular model and picture the electron clouds it's really clear that the carbonyl carbon, which is attached to two oxygens, is going to have a very different distribution of electrons than the methylene. And so you know here I've labeled the two carbons in red and blue schematically just to indicate that this has the same general trend as protons. So methyl or aliphatic carbons are going to have lower values of chemical shift and you know things that are attached to something like a carbonyl are going to be at higher chemical shift values just as in the proton spectrum. Okay so typically what people do with this in a synthetic context is get more or less a fingerprint of a molecule. So you have one dimensional proton spectra and you know they get more and more messy and organic chemists are really good at looking at these things and pulling out structures. So you know I know Professor Nowick teaches a graduate NMR class that's all about this kind of stuff. So it's all about you know how to interpret really complex spectra and get structures of organic molecules. I also teach a graduate NMR class that is all about Hamiltonians and you know how do you write your own pulse sequences and really developing the spin physics of NMR. They're very different skills you know and we have joked that we couldn't pass each other's finals, which you know may or may not be true, but there really are very different ways to approach it and what I'm going to try to give you in this class is a little bit of the physical chemist perspective on NMR. So you know don't lose sight of the fact that you can use this to solve the structures of molecules and it's fantastically useful in a synthetic context but there's a whole field of NMR research where we do something else. Okay so back to talking about chemical shift. Let's look at what this looks like in the solid state. So so far we've talked about chemical shift as though it's just a number. So we have a different distribution of electrons around the nuclei and as a result of that they experience a magnetic field that's adding to or subtracting from the main magnetic field and they show up in a different place on this spectrum. Well that's only true if your molecules are moving around really quickly on the time scale of the experiment and averaging out orientational effects. If we have something that's in a solid, so say we have a protein in a crystal and let's say it's a single crystal so that it has a really well defined orientation. If we look at a carbonyl carbon in the protein backbone, if we look at that double bond between the carbon and the oxygen and think about the local field that the carbon is experiencing as a result of those electrons, if it's staying still we can easily imagine that this is not isotropic.
So that carbon sees a different local magnetic field in the x, y and z directions and you'll see a signal for each of those and it gives this funny line shape and that's called chemical shift anisotropy. Again it's averaged out in liquids, we only see the isotropic value which is essentially the average value, but in solids this is really important and as with many of these things it's a double-edged sword: it contains a lot of information, so we can fit this line shape and get very detailed information about exactly how that carbonyl is oriented relative to the rest of the protein, certainly relative to the main magnetic field. This is really useful in a context like looking at a peptide in a membrane protein where you want to get the relative orientation of that carbonyl with respect to the membrane. However if you have a whole protein worth of line shapes that look like this and they're all overlapping that's a little bit hard to deal with because it's difficult to separate out the signals, and a lot of solid state NMR methods development is about how we deal with this. Putting in these interactions selectively during the times that we want to see them, and that can be done either with selective labeling involving putting C13 in specific places in the sample or it can be done spectroscopically. So chemical shift, as I alluded to on the previous slide, in solids is not a number, it's a tensor, and so we can make matrix representations of the Zeeman effect, where here I've omitted the gamma and h bar, and then our chemical shift is a tensor in three dimensions and you don't really have to worry about this except on the conceptual level. I'm not going to ask you to do anything with it but I do want you to know that it exists and that there's more to the picture than just the solution state idea where we have just the isotropic value. All right so here are some pictures of actual chemical shift tensors depending on the shape of the electron density around the nucleus and you can see they look really different depending on whether you have a prolate or oblate ellipsoid or if you have something that is centrosymmetric versus something that's completely asymmetric. And so there is this orientation dependence that can be fantastically useful or it can be a nuisance if you have a bunch of these things on top of each other. Okay so that's sort of the rundown of the chemical shift and everything that's associated with that, we will come back to it and talk about it some more. Let's talk about, let's go back to our organic chemistry picture of structural elucidation with NMR. So if we're talking about protons or carbon or N15 or anything like this there are some features that tell us something about the structure. So the number of signals is the first thing that gives us a clue about what's going on. That tells us about the number of chemically inequivalent nuclei. The position of the signals, the chemical shift, tells us exactly what functional groups are present. The intensity of the signals, if we integrate the area under all the peaks, tells us about the relative number of protons. We have to be very careful about using that for heteronuclei, things that aren't protons. And the reason is because magnetization gets transferred from proton to C13 in the course of a lot of the experiments that people typically use.
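(Backing up to the chemical shift tensor for a second with a small sketch, not from the lecture: the principal values below are hypothetical but in the right ballpark for a carbonyl carbon.)

```python
import numpy as np

# A hypothetical chemical shift tensor in its principal axis frame (ppm).
sigma = np.diag([240.0, 170.0, 90.0])

# The isotropic shift, which is what survives fast tumbling in solution,
# is one third of the trace of the tensor.
print("isotropic shift:", np.trace(sigma) / 3.0, "ppm")

# In a static solid the observed shift depends on orientation:
# sigma_obs = b . sigma . b for the unit vector b along B0.
b = np.array([0.0, 0.0, 1.0])
print("shift with B0 along z:", b @ sigma @ b, "ppm")
```

Anyway, back to peak intensities.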
And so you can't just take a C13 spectrum under typical experimental conditions and assume that it's quantitative because you're also going to be seeing information about which carbons are closer to the protons than others. But for protons that is a good assumption. You can integrate things and find the relative numbers of them. The last thing that's important is the spin-spin splitting. So in the solution context this is mostly going to be due to J coupling. And this can again be homonuclear or heteronuclear. So it can be between protons or, depending on how you do the experiment, it can be between protons and C13, protons and N15. And that gives you information about connectivity of chemical environments. And I will also add, if we're talking about solids, dipolar couplings are very important in learning about the structure. These give us long range distances. Okay, so let's look at some practical examples. And the goal here is to tie together what you already know from organic chemistry, you know how to look at these spectra in a practical way, with the underlying physical chemistry concepts of what's going on. And you know if that's not happening please feel free to ask questions. All right, so let's look at some typical examples. So just a reminder, in order for protons to give different NMR signals they have to be chemically inequivalent. So protons that are occupying sites that are the same in the molecule or that look the same when things are motionally averaged will show up at the same place. So for this particular molecule the methyl protons are labeled in blue. We have free rotation around single bonds in solution. You know everything is isotropically averaged and all of those methyl groups show up in the same place. The same thing for the two methylene protons here. Now again this is something that wouldn't necessarily be true in a solid. If we had this molecule crystallized and things were really rigid it's possible that the way the particular crystal structure worked out that some of these protons could be closer to other things than others and we would see splittings. In solution that's definitely not going to happen; you have to assume that everything is moving freely. Okay so the number of NMR signals is going to be equal to the number of chemically inequivalent types of protons in your compound. So here are just some examples where you have different numbers of different kinds of protons. And again here are some examples that are going to give slightly more complicated spectra and we'll revisit some of these molecules as we talk about drawing these kinds of spectra yourself. And again you have to take into account the rigidity of the molecule. So in this cyclopropane with a chlorine in one site this thing can't flex very much so the ones on the bottom are not equivalent to the ones on the top even if they otherwise look symmetric. So the intensity of the signals also tells you something, assuming that we're talking about protons, and you can't just measure the height, you have to integrate it, because peaks might have different widths even in the same spectrum. You know, again, the peak width depends on the relaxation time and that can be different for different types of protons even in the same sample and we will see how that works. So again this gives you a ratio not an absolute number of protons that we have but it does give us a good relative idea of how many of each type there are in the sample.
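(A tiny sketch of that integration point, not from the lecture: two simulated Lorentzians with different widths but areas in a 3 to 2 ratio, like a CH3 next to a CH2. Peak heights would mislead you; integrals recover the ratio.)

```python
import numpy as np

def lorentzian(x, x0, fwhm, area):
    """Area-normalized Lorentzian line."""
    g = fwhm / 2.0
    return area * g / (np.pi * ((x - x0)**2 + g**2))

x = np.linspace(0.0, 10.0, 20001)
dx = x[1] - x[0]
# A sharp peak with area 3 and a broader one with area 2.
y = lorentzian(x, 2.0, 0.05, 3.0) + lorentzian(x, 4.0, 0.20, 2.0)

i1, i2 = np.argmin(np.abs(x - 2.0)), np.argmin(np.abs(x - 4.0))
a1 = np.sum(y[(x > 1) & (x < 3)]) * dx   # integral around the first peak
a2 = np.sum(y[(x > 3) & (x < 5)]) * dx   # integral around the second peak
print("height ratio  :", y[i1] / y[i2])  # ~6, misleading
print("integral ratio:", a1 / a2)        # ~1.5, the true 3:2
```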
Okay so getting back to the quantum mechanical underpinnings of this stuff we've mostly been talking about spin one half and I'm sure that's pretty much what you've seen in your previous work on these things. There are also nuclei that have spin greater than half and I alluded to this a little bit talking about the quadrupole interaction and these things are important and we are going to do some problems pertaining to them later on in the class. So for example in organic chemistry you assume that if protons on a molecule are deuterated that you're not going to see any signal from the deuterium and that's true if you're looking at the proton resonant frequency. So one thing that's nice about NMR is that it's incredibly specific in terms of the resonant frequency of the nuclei. If you're looking at protons you're not going to see interference from other kinds of nuclei except indirectly through the J couplings if the coupling is strong enough, and it turns out that the J coupling between deuterium and anything else that you're going to see is sufficiently weak that you often don't have to worry about it. But deuterium is a perfectly fine NMR nucleus with spin one and in my lab for instance we look at it all the time and lots of NMR labs do that. So just to give this in a more general way here's the spin quantum number for a nucleus. So it's the same as other types of angular momentum that we've looked at. There is an overall angular momentum and there's also a Z component of the angular momentum. So again the math works out just like orbital angular momentum and other things that you've seen in a physics context but here we're talking about nuclear spin. So what is nuclear spin or electron spin? Nobody really knows. It's an intrinsic property of these objects that happens to obey the same mathematical formalism as spinning charges. But you know it's really convenient to understand how to do the math but that doesn't necessarily mean we understand it. Okay so again if we go back to this picture if we have spins that are greater than a half we need to worry about the quadrupole interaction. We can look at our nuclear angular momentum in the same way as some of these other things we've seen in electron angular momentum. You have seen this before: the cyclic commutation relationship between angular momentum operators that came up in a homework assignment. And previously we didn't really use it for anything. It was just an example of finding commutators and things like that that we needed to do to look at matrix representations of operators. Well now we're going to use it for something. So these values of the spin angular momentum as you can imagine are pretty useful in NMR because, you know, IZ gives us the eigenvalues of the Zeeman interaction; the eigenvalues are plus and minus one half, and those correspond to the eigenstates being alpha and beta. And IX and IY are what we can measure in the XY plane. So it's good to review your angular momentum operators because we are about to use them. Okay so as I said the eigenstates of IZ are specified by these quantum numbers and we can write them as a ket like this and that's useful when we're talking about nuclei that have spins greater than one half. So for spin one half there are only two states and you can call them up and down or alpha and beta. The ones for nuclei with larger values of I don't have nicknames so you have to represent them using this kind of a ket.
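(Since we're about to use them, here's a sketch, not from the lecture, that builds the spin matrices for any I from the standard ladder operator matrix elements and checks the cyclic commutation relation. Everything is in units of h bar.)

```python
import numpy as np

def spin_ops(I):
    """Ix, Iy, Iz matrices for spin quantum number I in the |I, m> basis,
    built from the ladder operators (units of hbar)."""
    dim = int(round(2 * I)) + 1
    m = I - np.arange(dim)                 # m = I, I-1, ..., -I
    Iz = np.diag(m)
    # <m+1| I+ |m> = sqrt(I(I+1) - m(m+1)), one step up the ladder
    Ip = np.diag(np.sqrt(I * (I + 1) - m[1:] * (m[1:] + 1)), k=1)
    Im = Ip.T
    Ix = (Ip + Im) / 2.0
    Iy = (Ip - Im) / 2.0j
    return Ix, Iy, Iz

Ix, Iy, Iz = spin_ops(0.5)
print(Iz)                                        # diag(+1/2, -1/2): alpha, beta
print(np.allclose(Ix @ Iy - Iy @ Ix, 1j * Iz))   # cyclic relation [Ix, Iy] = i Iz
```

For I equals one half this reproduces the familiar two by two matrices, and calling it with I equals one gives the three by three operators you would use for deuterium.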
So if we operate IZ on the state with values of L and M we get M back as the eigenvalue and the eigenstate is this original state. And so in the Zeeman basis, you know, again our sample is aligned along Z and we have well defined values of IZ that we can measure. Here our eigenstates are alpha and beta and here's what those look like if we write them out as these kets with values of L and M. Okay so all that means is that if we have our spins aligned in the magnetic field along Z and we measure the values of IZ we're going to get alpha and beta in some well defined ratio that depends on the relative populations. If we measure IX or IY we're not quantized along that axis so we'll get random proportions of these states. Okay I also want to point out that the Hamiltonian and IZ are both diagonal in the Zeeman basis and that means they commute. So I have a little bit of an animation fail in here so I'm just going to put everything up and talk about it. Alright so here's our matrix representation for IZ. So our spins are in the Zeeman basis and so what that means is we have this h bar over 2 out in front which has to do with the particular energy values but more important is looking at what the eigenstates are at this point. So everything is in either the alpha or beta spin state and again the one half has been pulled out in front since the values of M are plus or minus one half. And so we have values on the diagonal and nothing off the diagonal which tells us that everything is either in alpha or beta. And we know what our Hamiltonian is. This is gamma B naught times IZ and so that means if we measure the energy of alpha we get back plus one half gamma h bar B naught times alpha and similarly for beta we get minus one half gamma h bar B naught times beta. This is the same thing that we've already seen and so we can use that to construct the matrix representation of the Hamiltonian. So we're just applying these operators the same way that we have before with things that were more concrete. You know now we're applying this to the spin states and so we can make these matrix representations of both IZ and the Hamiltonian and since they are both diagonal in this basis they commute with each other. And if that's not 100 percent clear that's fine, we're going to spend more time talking about it next time. I just wanted to introduce it so that people have something to think about for the next class. I'm going to post this lecture plus some practice problems for the NMR part later today. Does anybody have any questions before we quit? Yes? So I just want to make sure I understand the NMR. So you put in a pulse. Is that a pulse of magnetic? Are you varying the magnetic field direction? What is the pulse? That's a really good question. Okay so the pulse is a radio frequency field that's essentially producing a magnetic field that's orthogonal to the main magnetic field. And that should be really weak right, because like my big magnetic field is, you know, if we talk about it in frequency units, in my lab it's 500 megahertz. The RF field, again to give a typical value, is maybe 140 kilohertz. So it should be way weaker than the main magnetic field, so it's amazing that it does anything. The only reason that it does anything is because it's on resonance. So our nuclei are precessing about the main magnetic field really fast and that applied field that you're adding is following it around for thousands of revolutions, and so, exactly, you tip only the ones that are on resonance.
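(To put numbers on that answer, here's a little rotating-frame sketch, not from the lecture: the effective field a spin sees is tilted away from z by an angle set by the ratio of the RF field strength to the resonance offset. The B1 value is the typical number just quoted, and the offsets are made up for illustration.)

```python
import numpy as np

def tilt_from_z_degrees(offset_hz, b1_hz):
    """Tilt of the rotating-frame effective field away from the z axis."""
    return np.degrees(np.arctan2(b1_hz, offset_hz))

b1 = 140e3   # RF field strength in frequency units, the value quoted above
for offset in [0.0, 140e3, 5e6, 375e6]:
    print(f"offset {offset:12.0f} Hz -> effective field tilted "
          f"{tilt_from_z_degrees(offset, b1):6.2f} degrees from z")
```

On resonance the effective field lies entirely in the transverse plane, so even that weak B1 can nutate the spins completely; at something like the proton to carbon frequency difference it barely tilts at all.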
And so that's why we don't randomly see carbon signals when we're looking at a proton spectrum because the carbons are at a way different frequency and they're not interacting with that RF. And so your readout then is whether it's resonating with that frequency or not. And you control the frequency that you apply. That's an experimental parameter that you control. And one of the main things that we do in my lab is build probe circuits to apply RF pulses in different ways and change the experimental conditions. I'll show you guys a little bit of that probably later on. All right we are done for today. See you next time.
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 17. Molecular Structure & Statistical Mechanics -- NMR -- Part 2. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:02:54 Zeeman Effect 0:06:46 High Field Magnets for NMR/MRI 0:09:09 Nuclear Zeeman Effect 0:11:23 Nuclear Spin Hamiltonian 0:13:28 Relative Sizes of Interactions 0:19:01 Pulsed NMR 0:21:49 Protons Absorbing in a Predictable Region 0:37:58 Spin Quantum Number 0:40:29 Angular Momentum Operators 0:41:40 Eigenstates and Eigenvalues 0:43:35 Zeeman Basis
10.5446/18923 (DOI)
Good morning. Today the plan is to finish up talking about electronic spectroscopy and to start moving toward talking about NMR. So we're going to have a little detour where we look at Fourier transforms and talk about crystallography really briefly. Just because it's neat and it ties into a lot of other things that we've been doing. It uses Fourier transforms. It involves interactions between photons and matter. It's not spectroscopy. We're going to talk about it not in any great depth, but it's neat and involves symmetry. Does anybody have any questions about electronic spectroscopy or anything from last time? Yes? It's hard to hear you. Okay, let me see if I can fix that. Is that better? All right, good. Any other questions about electronic spectroscopy or Franck-Condon factors or anything like that? Okay, good. Everybody's ready to take the quiz. Get out a piece of paper. Come on, you walked into that. I would be happy to stand here answering questions as much as you want, but you said you're ready to take the quiz, so here it is. Your lowest quiz is going to be dropped anyway, no matter what. And then the way the seminars work is you get points if you answer the questions. I've been giving five points for answering the questions. You have to answer every question and write a reasonable amount for each thing to get all the points. So the seminars are, in general, worth a little more than the average quiz and how it works is they just get averaged in with your quiz grade. So you can go to as many as you want as long as they're the actual P-Chem seminars or things that I've approved as being related to P-Chem and there's no limit. So I'm sure this would never happen, but if you mess up the quiz every time and you go to a lot of seminars, you can pretty much make it go away. Not that anyone is worried about that today. So let's talk about this a little bit. So it's not actually that difficult, but I think it's a little tricky because it's maybe worded in a different way than you're used to seeing it or you have to pull together a lot of stuff that you've learned from different places in the class. And that is kind of hard. So that's one of the things that I'd really like you to get out of it. So one of the problems with P-Chem, at least starting out, is that a lot of the things that people actually do in our research labs are so involved computationally that we can't really do a realistic example in class. And so what I'd like you to get out of it is an understanding of how we sort of work through these problems, you know, what the concepts are, the cases where symmetry helps us, and to really understand the fundamentals of some basic problems. And then, you know, if you become a physical chemist and you do electronic spectroscopy on more complicated molecules, then you can learn all the tools that you need to understand these things computationally. Okay, so I'm not going to spend a huge amount of time on this, but I just wanted to briefly talk about how to do it. Okay, so the first one is: is an electronic transition from a sigma plus state to a sigma minus state induced by Z polarized radiation allowed in HCl? Okay, so what do you need to know here? You need to remember that HCl belongs to the C infinity V point group, which hopefully everyone figured out from having the point group table there if nothing else.
The other important thing about this question is that if you just remembered Laporte's rule, you got the wrong answer because that only works for environments where there's an inversion center and of course HCl doesn't have an inversion center. So how you actually do this is you take your character table and look at the symmetries for the relevant species. So sigma plus is A1. We have a Z for the dipole moment operator in the Z direction. That's also A1. And then sigma minus is A2. And you multiply the characters for those things together and of course what you end up with belongs to the A2 symmetry species, which is not A1, and so you can say the transition is not allowed. So that's how you do that. So again, it's really easy if you remember how to do it and if you don't, it's confusing. So what I want the take home message to be is just think about the problem and the information that you're given and figure out how to do it. Yes? No, so Z polarized radiation is telling us that it's only along Z. So that's the idea is kind of there but you got confused about the details. Okay, yes? Did we show that when you were similar or in terms of like the transition from plus to minus is forbidden, but in what case? So we talked about various specific cases. As far as getting credit, if you get the right answer and you wrote some reasonable rationale, you get the right answer. But I guess what I worry about on exams is that if you get the wrong answer but you had a reasonable thought process, please make sure you write down enough so that we can give you partial credit. Okay, so that's the first one. The second one, basically I just wanted you to draw a potential diagram for these electronic states. I said it only has two to make it relatively simple. And the business about the upper state being shifted in the X direction by, you know, 1.5 times the equilibrium bond distance is just to show that the upper state is, you know, shifted over in the X direction as far as where its minimum is. And so if you drew something that looks kind of like that and, you know, drew in some vibrational energy levels, then that's good. It's useful to be able to visualize these things. And then for the expression for the amplitude of the transition, basically what you need to do is recognize there that the electronic part of it doesn't really enter in. We're talking about the Franck-Condon factor between these vibrational states. So nu double prime equals zero means that this first Hermite polynomial represents your initial state. And then nu prime equals three. So your final state is represented by this other Hermite polynomial. And then you have to stick the X operator in between them because that's your dipole moment operator and integrate that with respect to DX. And that's your transition dipole. And then the Franck-Condon factor is related to the square of that. So that's basically what you do. Yes? It's the whole wave function, but for this example, it's related to these Hermite polynomials. So anyway, you didn't need to evaluate it. That wasn't part of the issue. I just wanted to write it down. Yes? If you put that it's proportional to the overlap of the two states, that is also fine. Yeah. Okay. So, well, that's annoying. Okay. Technical difficulties. Alright. Let's finish up our discussion of electronic spectroscopy. Alright. So term symbols are necessary for describing the states of these molecules.
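(Circling back to quiz question 2 for a moment with a numerical sketch, not from the lecture: harmonic oscillator wavefunctions in dimensionless units, with the upper-state minimum displaced. The displacement value is just the one from the quiz setup, and the grid is arbitrary.)

```python
import numpy as np
from math import factorial

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) by the standard recursion."""
    h_prev, h = np.ones_like(x), 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def psi(n, x):
    """Harmonic oscillator wavefunction in dimensionless coordinates."""
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * hermite(n, x) * np.exp(-x**2 / 2.0)

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
d = 1.5   # displacement of the upper-state minimum (quiz setup)

# Transition amplitude <nu'=3| x |nu''=0>, with the upper-state
# wavefunction shifted by d, evaluated as a simple numerical integral.
amp = np.sum(psi(3, x - d) * x * psi(0, x)) * dx
print("amplitude:", amp, "  square:", amp**2)
```

Okay, back to term symbols.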
And I just want to talk about this in a little bit more detail for diatomic molecules because I think some people are confused about it. I think everyone gets the atomic part from last quarter. The people I've talked to seem to have a really good handle on that. But I think for where this comes from for diatomic molecules is a little bit confusing. And at this point, like, for, I don't want to spend a lot of time learning how to generate these for complex molecules. Let's just worry about the diatomic case. And mostly I want you to understand what they mean. So if we have our diatomic molecule, we have values of L and S for the whole thing. And our term symbol looks like this. So we've got the superscript is the spin multiplicity which is 2S plus 1. And here S is the total spin quantum number for the molecule. So to get, you know, to get this we have to sum over all the electrons in the molecule. And then this thing which is going to be sigma pi delta, etc., just like it's SPD for the atomic case, that just tells you about the value of lambda for the molecule. Again, summed over all electrons. And then this thing, the subscript which was J in the atomic case, subscript's called omega. And same thing, you're adding up the Z projections of L and S. And here's a little diagram of that for the molecule. I also posted a PDF of a tutorial on this stuff that I found online that I think might be helpful. So you can check that out if you want to or if you still feel like you need a review of this stuff. Okay, so, and again just terminology, in this particular thing sigma is the projection of S on the internuclear axis. So, same thing as what we were talking about in the atomic case, we had like the total angular momentum and the Z component of the angular momentum. Here we're projecting everything on the internuclear axis, but the idea is the same. So that's where these things are coming from and what they're about. Okay, so let's look at some specific examples of how to build this up. So if we have our general chemistry level molecular orbital diagram, we start with some P orbitals and here we're going to define Z as the internuclear axis and say this is what we get when our PZ orbitals overlap. So we know that we get two molecular orbitals, we get a bonding and an antibonding orbital and now we know that we can describe these as having G and U symmetry based on whether they're even or odd with respect to inversion. And since we started out with the total value of M sub L equals zero and added that up, that's going to give us sigma terms. So we're going to get sigma orbitals out of this. And for sigma terms, we have an additional symmetry descriptor that we need to worry about. So G and U refer to what happens when you go through an inversion. Does it change sign or stay the same? And then we also have plus and minus. And plus and minus refers to what happens when you reflect through a plane containing the internuclear axis. So here's a picture of that. So that is going to be a sigma minus term because when you reflect through that plane it changes sign. Whereas something that looks like this, this bonding molecular orbital, that's going to be sigma plus because it stays the same when you reflect it through that plane. So those are the symmetry descriptors for sigma terms. When we get into things that have larger values of lambda, then some of these things disappear. So we don't have the plus and minus descriptor anymore. But we can still write term symbols for these things. 
So now let's say we have PX or PY orbitals and they're going to be the same. So we can just look at either PX or PY. Same thing. These can overlap constructively or destructively. We started with two atomic orbitals so we need to get two molecular orbitals at the end and we get a pi and pi star molecular orbital. But now we have M sub L being plus or minus one for the PX and PY orbitals. And again we can describe our pi and pi star molecular orbitals as having g or u symmetry with respect to inversion. And these things give us pi terms. And we need to sum over all electrons to get that. And it's plus or minus one. So hopefully that helps seeing some concrete examples as to what these things mean. Let's talk about Franck-Condon factors a little bit more. So we have looked at this mathematically and we've seen how to write down expressions for them. Let's just look at some pictures and see what that looks like graphically. So basically if we have the bonding character of two states being pretty similar. So in this case both of these wave functions look like there's a lot of electron density between the atoms. They have a lot of bonding character. In that case there's going to be a lot of overlap right at the point where the inter nuclear separation is at the equilibrium distance. And we're not going to see a lot of different vibrational lines going on there because there's no reason for the nuclei to change position very much as a result of the electrons popping up to that excited state. So remember we said the mechanism for that is that the electrons change state and then suddenly the nuclei are feeling all kinds of different electronic potentials than they were before because the electron cloud has changed shape, and then they start to move around and we see these vibrational progressions. If the states were pretty similar in bonding character to begin with then there's not very much change and we don't see a whole bunch of lines in the spectrum. Whereas if the bonding character of the two states is really different. So in this case we've got the electronic ground state. It doesn't have a node in the middle of it and then it hops up to this excited state where there's a lot of nodes. It does not have very much bonding character in the middle of the molecule. That causes a big change in the shape of the electron cloud and so we see this progression that has a lot of peaks in it. And also the potential is shifted in the x direction relative to the ground state. Okay so we can also look at these things and learn something about dissociation energies. So in some cases we can estimate this really directly. So again G of nu is the vibrational term value expressed in wave numbers and nu is the vibrational quantum number here. So we can write this down in terms of the frequency of the transition, and x sub e is the anharmonicity correction for a Morse oscillator, and I know we did some practice problems like that in the homework. And so we can look at what happens at nu max. So when nu is maximized here that means we're at the dissociation limit. And so if we take the derivative with respect to nu and set it equal to zero that gives us an expression for nu max. And the value for the energy of the transition when that condition is satisfied tells us about the dissociation energy. And so sometimes you can do that. You can estimate it directly. Another thing you can do is use something like the Birge-Sponer plot which we talked about a little bit. There were some practice problems on that.
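(Here's what that direct estimate looks like in numbers, as a sketch that isn't from the lecture; the constants are roughly those of HCl. Keep in mind this simple extrapolation usually overestimates the true dissociation energy.)

```python
# Estimating the dissociation limit from anharmonic vibrational constants,
# G(v) = (v + 1/2)*we - (v + 1/2)**2 * we*xe   (all in wavenumbers).
we, wexe = 2990.9, 52.8              # cm^-1, roughly HCl
xe = wexe / we

v_max = 1.0 / (2.0 * xe) - 0.5       # where dG/dv = 0, the dissociation limit
De = we / (4.0 * xe)                 # well depth in the Morse approximation
D0 = De - (0.5 * we - 0.25 * wexe)   # subtract the zero-point term G(0)
print(f"v_max ~ {v_max:.1f}, De ~ {De:.0f} cm^-1, D0 ~ {D0:.0f} cm^-1")
```

The Birge-Sponer plot is the graphical version of the same bookkeeping.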
That's where you plot the spacing between adjacent vibrational levels against the vibrational quantum number and, formally, you should take the area under it. A lot of times you have to extrapolate because you don't see lines going all the way up to the dissociation limit here. So these are the kinds of things that we can get out of electronic spectra. But a lot of what they're actually used for in practical applications are more like things that we saw earlier on when I talked about just some applications. So a big thing that is done with electronic spectroscopy is just Beer's Law. Just looking at, okay, I have some substance that absorbs light and I want to know the concentration of it and you just use Beer's Law to figure out how much of it you have. That's a very common application of electronic spectroscopy. Of course there are a lot of other uses involving learning something about the molecule as we've been talking about here. And an important branch of physical chemistry research is taking these kinds of electronic spectra and using that to find out about the bonding energy of molecules, what kind of bonding is being formed, what do some of these excited states look like. And of course that feeds into a lot of things like in synthetic chemistry, learning about how the symmetry of different excited states affects what kinds of molecules you can make. And before we finish this off I want to talk about one more application. So so far we've mostly been talking about electronic spectroscopy in the UV and visible range. So that has to do with valence electrons being promoted. They're relatively low energy transitions as these things go. Of course they're higher energy than the vibrational and rotational transitions but we're mostly talking about valence electrons jumping up and down. If we want to learn more about the bonding structure of the molecule or atom we can do photoelectron spectroscopy. And so what we're doing here is it's a brute force approach. So instead of sweeping the frequency and looking at where things absorb or emit we are just bombarding the sample with high energy photons at a fixed wavelength. This is often in the x-ray region. So the idea is we have plenty of energy available to ionize all kinds of electrons even deep within the core of the molecule. And then we can measure the kinetic energy of the electrons that are detected. And here's a schematic of how that works. So we have a beam of what's shown here as atoms. Could be molecules, depends on what you're trying to measure. And that is being blasted with high energy photons. So again a lot of times this is x-rays, it's done at synchrotrons pretty often. And then the electrons get ejected out and they are placed in an electric field and so they bend. So the faster ones are over here, the slower ones are over there. And so you can measure the kinetic energy of the electrons that come out of the sample. And that tells us something about the bonding energy because of course the amount of energy that it takes to remove those electrons is related to the energy of that bond. And so that can teach us about the bonding structure. So here are some typical values. So this is our energy in megajoules per mole. So it takes a lot of energy to knock these things off. And as we get into, so if we look at boron here the valence electrons are relatively easy to knock off. And then it starts to get a little bit harder. But then when we get down to that 1s shell it takes a lot of energy to ionize these electrons.
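(A last small sketch, not from the lecture: turning measured electron kinetic energies into binding energies is just energy bookkeeping, KE equals h nu minus binding energy. The photon energy is the He(II) line often used in UV photoelectron work, and the kinetic energies below are invented but roughly what you would see for N2.)

```python
# Photoelectron spectroscopy bookkeeping: BE = h*nu - KE (per electron, in eV).
photon_eV = 40.8   # He(II) radiation, a common UV photoelectron source

# Hypothetical measured kinetic energies for electrons ejected from N2:
for ke in [25.2, 23.9, 22.1]:
    print(f"KE {ke:5.1f} eV -> binding energy {photon_eV - ke:5.1f} eV")
```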
So it's as you'd expect: the electrons that are closest to the nucleus are the most tightly bound. But we do have plenty of energy to ionize all of them. And so when you do this and look at all the peaks that you get, it tells you something about the electronic structure of the atom or molecule. Okay, so in this case where we're just looking at atoms, you might think it's really boring, because surely somebody has already measured the ionization energies for the different electronic states of all the common atoms. And that's true, but you can use that to your advantage. One of the primary uses for x-ray photoelectron spectroscopy is looking at a surface and figuring out what kinds of atoms are on that surface. Since these energies are well known (there are tables of them), you can find really low levels of different kinds of things on the surface and learn about what it looks like. You can also do this for molecules. So here's one for N2. Here's our general chemistry molecular orbital diagram for N2, which is a reasonable description of the bonding. Okay, so as we saw before when we were talking about this in a theoretical sense, there are three bands in the spectrum. A is what we get when we remove a weakly bonding electron; that's from the 2p sigma g orbital. That transition has relatively few lines, so ionizing it tells us that hopping up from the ground state to that state of the ion doesn't change the internuclear separation very much. Whereas B is removing a strongly bonding electron from the pi u molecular orbital, down here. That requires a lot more change in the internuclear distance, and so we see a bunch more vibrational lines. And then C comes from removing a weakly antibonding electron; there we have a weaker transition and we only see one peak, so it's a relatively short progression. So that's just to give you an idea of how these things work and how they're used. Okay, we don't have a lot of time left, but I do want to start talking about x-ray crystallography a little bit, and we'll finish that part up next time. Just to give you an idea of where we're going with this: there's a whole chapter on solids that contains a bunch of stuff about crystallography. I think it's chapter 9. We mostly skipped it. It might be useful to go and skim it and review things like the difference between crystalline and amorphous solids. If something is a crystal, it's in a really regular repeating lattice; if it's amorphous, it's still a solid, but it's a lot more disordered. A lot of the stuff in that chapter is pretty descriptive and there's not a lot we can really calculate with it, so it's useful to look at, but we're not going to spend a lot of time on it. What I want to talk about is crystallography as an interaction between a periodic lattice of your molecules in the crystal and x-rays, and of course this happens because we can have constructive and destructive interference between the wave functions of the electrons and the incoming x-ray photons. Crystallography is not spectroscopy; we're just looking at x-rays diffracting off the electron density. But it's related to a lot of the other stuff we're doing, because it involves these same ideas of symmetry.
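The quantitative condition behind that interference, which the next lecture develops properly, is Bragg's law, n lambda = 2 d sin(theta). A minimal sketch, assuming a Cu K-alpha source at 1.5418 angstroms (a standard lab wavelength; the lecture doesn't specify one):

```python
import math

# Sketch: Bragg's law, n*lambda = 2*d*sin(theta). Constructive interference
# off the periodic electron density happens only at angles satisfying this,
# which is why a regular lattice turns an x-ray beam into discrete spots and
# why spot spacings report the *inverse* of the lattice dimensions.
WAVELENGTH_A = 1.5418            # Cu K-alpha, angstroms (assumed source)

def d_spacing(two_theta_deg, n=1):
    theta = math.radians(two_theta_deg / 2.0)
    return n * WAVELENGTH_A / (2.0 * math.sin(theta))

print(d_spacing(20.0))           # reflection at 2-theta = 20 deg -> d ~ 4.4 A
```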
So instead of point groups, when we start talking about crystal lattices we need to assign things to space groups, and anybody who's worked in a crystallography lab or has solved a crystal structure of an organic molecule has seen some of these. It's kind of the next level of the symmetry arguments that we're talking about. And the periodic structure of the crystal is what enables it to diffract x-rays. So how many of you have been involved in solving a crystal structure in some way, either in research or in labs? Okay, so a few. But how about crystallizing stuff in organic chemistry lab to purify it? Has everybody seen some of that? Okay. Yeah, so crystallizing your compound is a good purification method because you make this really regular lattice where molecules that are the wrong shape don't fit in there. And then we end up with this periodic structure that can actually diffract x-rays. That happens because electrons have some wave character: they can interact constructively and destructively with the photons. And so you get something that looks like this. We shoot the x-ray beam at your crystallized molecule (the crystal in this illustration looks pretty messy, but in general you need a very nice crystal in order to get diffraction), and then the x-rays that get scattered off the molecule are detected, and you get a regular pattern of spots that encodes the crystal lattice. In fact, it gives you the inverse of the dimensions of the crystal lattice, in an indirect kind of way. And then that enables you to get an idea of what the unit cell looks like for a molecule. So here's one from a crystal structure of rhodopsin. We talked about rhodopsin when we were talking about how our eyes work and how we have to be careful about calibrating our instruments. So we understand rhodopsin because it's been crystallized; there have also been a bunch of NMR structures of different kinds of rhodopsin. But so here's what the unit cell of this looks like. And again, if you want to review what unit cells of crystals look like, go check out, I think it's chapter 9; there are a bunch of simple examples in there. This is a complicated one, but it obeys the same principles. So here A, B, and C are the dimensions of the unit cell; that's our repeating unit. This is the origin here. We're looking down the A axis, and then B and C are shown here. Here are just some examples of diffraction patterns that can be used to solve molecular structures. This is the original fiber diffraction pattern of the DNA double helix. My DNA picture didn't show up here, but hopefully everybody knows what it looks like. So we see these regular repeating units in the case of the DNA: we've got patterns here, and then reflections here that reflect the repeating pattern of the DNA. Of course, in that case it's a fiber, so it's only crystalline in one dimension. Whereas if we have a three-dimensional crystal, we see these really regular patterns of spots, which we can then analyze and use to get the molecular structure. And I'm going to quit there for today because we're out of time, but next time we're going to talk about the mysterious process by which that happens. See you on Friday.
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 15. Molecular Structure & Statistical Mechanics -- Electronic Spectroscopy -- Part 4. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:01:42 Quiz 3: Electronic Transition 0:09:25 Diatomic Molecular Term Symbols 0:15:19 Franck-Condon Factors: Diagram 0:17:40 Dissociation Energies 0:20:57 Photoelectron Spectroscopy 0:27:38 X-Ray Crystallography
10.5446/18921 (DOI)
Okay, so let's continue our discussion of electronic spectroscopy. Last time we talked about the basics: what it is, and what happens to electronic states, most of which do not fluoresce or do anything else interesting on the way back down to the ground state. I also want to talk a little bit about some applications and things that are interesting. One of the things that's fun about P-chem is that there's really a lot we can go into that's beyond what's in your book and beyond the things we can do in class, and I just want to mention a few of these things. So, we can take advantage of the interaction between electronic and vibrational states in resonance Raman. Remember, we've talked multiple times about the idea that in using these different kinds of spectroscopy we get a lot of interplay between different kinds of excited states. For example, in vibrational spectroscopy, when we're exciting something to an upper vibrational state, the rotations get excited too, and we see all of the rotational states. Similarly, in electronic spectroscopy a lot of times we see the vibrational states, and sometimes the rotational states too if we have enough resolution. We can also use electronic states to enhance the intensity of particular Raman modes. So we talked about how Raman spectroscopy is useful for looking at vibrational modes in molecules. But one disadvantage it has is that the signal is very weak compared to direct IR spectroscopy. When we're looking at scattered light, most of the light just scatters straight off in the Rayleigh line: it doesn't gain or lose energy, and you just see the same wavelength of light that you put in. Resonance Raman deals with that problem; the signal is enhanced. It also deals with the problem of having a very complex molecule that has a lot of vibrational modes. So instead of the simple molecules we've been looking at, imagine you have a protein: this huge molecule with all kinds of vibrational modes going on. That spectrum is going to be much too hard to understand; you're going to have all these peaks on top of each other, and there's not really a good way to interpret it. Resonance Raman deals with both of these problems at once, and we do that by choosing the excitation so that we're exciting vibrational states associated with a particular electronic state. If we're on resonance with a particular electronic excited state, then vibrational modes that are associated with that particular state are going to be enhanced by a lot. And so that means not only do we see a larger signal for the vibrational modes we're interested in in the protein, but those modes are amped up enough that all the ones in the rest of the protein that we're not interested in are not visible; they just fade into the background. So here's an application of that. This is something from Judy Kim's lab at UC San Diego. She's interested in looking at electrostatic potentials in azurin. Azurin is a protein that deals with electron transfer. One of the things Professor Kim has done is look at tryptophan; the side chain of tryptophan is involved in the reaction in this case. She can make radicals on this particular side chain, and look at the difference between a tryptophan in an environment where it's exposed to the solvent and one where it's not.
And it turns out that these look very different, and that has some implications for how the protein functions. And you can see the difference between these two things in the resonance Raman spectrum. Being able to have these vibrational modes enhanced only near this particular electronic state, in this case in a rare amino acid where you don't have very many of them in the protein (in fact, I think azurin has two tryptophans, one that's in the solvent and one that's not), enables you to learn some very specific information about this molecule, where otherwise it would be very complicated. What's the solvent? Probably water; well, definitely water in this case. The protein is soluble, but of course the inside of it is a hydrophobic environment. So the tryptophan that's just surrounded by the rest of the protein is going to see a different dielectric constant and a different local environment than one that's closer to the surface, which can interact with the water. Okay, so that's just a hint as to where we can go with some of this stuff and how electronic spectroscopy can interact with vibrational spectroscopy. Let's talk about fluorescence. Last time we talked about what happens to excited states. A lot of them just relax back and produce heat; it's not so much fun. Some molecules dissociate and fall apart, and then they do photochemistry. But other ones undergo fluorescence and phosphorescence, and this is pretty interesting. So here's an example from my lab. These are fluorescent bacteria that we found at some point a couple of years ago. We still don't know what molecule makes these things fluorescent, but they look neat and they illustrate what we're talking about. Here the excitation wavelength is right on the edge of the UV, so we can see some purple light, and then the fluorescence is shifted into the blue part of the visible. We can see that if we look at the spectrum: the excitation wavelength is shorter, closer to the UV, and then here's the emission wavelength, at longer wavelength and lower energy. Here are some other examples. These are quantum dots of various sizes that are fluorescent: if you irradiate them with UV, they shine in the visible, and depending on the size of the quantum dots, we see different colors. So do you guys know about quantum dots? Is that something that comes up in your other chem classes? Okay. They're really interesting. These are little nanoparticles of a semiconductor material, in some cases cadmium telluride; cadmium sulfide is another good quantum dot material. Their excitons are confined in three dimensions, so a quantum dot behaves as kind of an intermediate between a giant molecule and a bulk semiconductor. And they're useful for all kinds of things, mainly just detecting stuff. I mean, they're interesting from a fundamental physics and chemistry perspective, just trying to understand how they work, but they're also used in imaging applications: we can irradiate them with UV light and then see their fluorescence. And here's an interesting application of this. These green quantum dots in this picture, which are being used for cellular imaging, were actually biosynthesized by earthworms. This group discovered that you can feed earthworms soil contaminated with cadmium and tellurium, and they poop out quantum dots that are monodisperse and have this particular size that makes them green. In this picture, the blue is a dye that stains the nuclei of the cells.
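Going back to those excitation and emission wavelengths for a second: the energy gap between them, the Stokes shift, is a one-line calculation. The wavelengths below are placeholders for illustration, not the actual values from the slide:

```python
# Sketch: the Stokes shift is the energy difference between the absorbed
# and emitted photons; the balance is lost to vibrational relaxation (heat).
H_C_EV_NM = 1239.84                    # h*c in eV*nm

lam_excite, lam_emit = 400.0, 470.0    # nm: hypothetical near-UV in, blue out
shift_ev = H_C_EV_NM / lam_excite - H_C_EV_NM / lam_emit
print(f"Stokes shift ~ {shift_ev:.2f} eV")
```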
And so this earthworm picture is just proving that they can put these quantum dots made by the worms into actual cells and use them for imaging. So why is that useful? I mean, it's funny, but it's actually interesting as well, because one of the problems with quantum dots in biological applications is that these materials are really toxic. So it's actually pretty amazing that earthworms can eat soil that's contaminated with cadmium and tellurium at all and not die. But what they do is they have these enzymes called metallothioneins that bind these toxic metals and package them into these little quantum dots, which end up coated on the surface with something that makes them soluble and biologically harmless. And I do not know the details of what the coating is. This had been done previously in yeast, but this was the first example of it being done in large quantities with something like an earthworm. So anyway, it's useful to be able to coat the quantum dots with something that's biocompatible, such that you could use them in applications like cellular imaging. Another fluorescence application that I'd like to talk about is green fluorescent protein. You've probably seen this in different places. It was the subject of the Nobel Prize in Chemistry in 2008. Green fluorescent protein has turned out to be useful for all kinds of biological experiments, because you can tag it on as a fusion protein with other proteins and have it act as a signal inside the cell. Making a fusion protein means that you attach the gene for GFP onto the gene for some other protein that you're interested in. Then when the target protein is expressed, the GFP will be expressed too, and it glows in vivo. And so this has been used to make all kinds of important things, like glowing green mice. Has anybody seen the glowing zebrafish, the GloFish? They're illegal in California, unfortunately, but you can buy fluorescent green fish in other places. And again, this seems kind of silly (you can draw little landscapes with bacteria expressing different variants of GFP that fluoresce in different colors), but it's fantastically useful in chemistry and biology, because it enables people to make multiplexed assays with different colors and look at where different proteins are occurring using this marker. So let's talk about how it works. Here's the protein. It has this beta-barrel structure, so it's like a can, and it's holding the chromophore inside. We're going to talk about what the chromophore is in detail in a minute. And this is important because the GFP chromophore has a specific chemical structure, but it also has to have this low dielectric constant environment to work. It has to be stuck inside the protein: if you just take it out and put it in water, it doesn't fluoresce. And so we're able to use things like fluorescence microscopy to see a lot of detail about what's going on in cells. These are two examples, a fish eye and some squid epithelial tissue, where different fluorescent dyes are being used in the microscopy. In this case the green is GFP; in this case it's another dye. But fluorescent tagging of different kinds, whether it's expression of a protein or binding of a fluorescent molecule, is used all over the place to see what's happening in vivo and keep track of reactions. Okay, so here's what the GFP fluorophore is. It's got these three amino acid side chains: a tyrosine, a glycine, and a serine that undergo this reaction.
So this thing is covalently attached to the protein. It's all inside that giant beta barrel. And when GFP is expressed, it doesn't fluoresce right away. When it comes off the ribosome, the protein is not matured; that's what the term is. It takes some time for this reaction to happen. These residues are arranged spatially in just the right place so that they can cyclize, lose water, and then get oxidized to make this chromophore, which then fluoresces. But another thing that's really interesting about it is that, as we were talking about with the retinal inside rhodopsin, the wavelength at which this thing interacts with light, both in terms of the absorption and the fluorescence that you see, can be shifted by changing the local protein environment around it. And so people have been able to mutate this protein, partially by trial and error, partially by using things like molecular dynamics simulations to figure out which parts of the protein are important, and they've been able to generate all kinds of different colors of GFP variants. That enables people to use these multiplexed assays in different ways. And here's what the chromophores look like for slightly different variants of GFP. Okay, so those are some of the ways in which fluorescence is useful, and you can see what some of the chromophores look like. Again, we have a lot of flat molecules, where you can imagine they don't have so many degrees of freedom to move around, and they're trapped in this rigid protein structure, and so emitting a photon is what happens. We can describe a lot of the things that happen with excited states using a Jablonski diagram like this one. Okay, so we start out in the ground state down here. It's called S naught. We're going to get into what S and T mean in a minute. And then various things can happen. If the system absorbs a photon (that's this purple line here) and it goes up to one of these excited states, in this case S1, then there are various pathways available to it. This dotted black line is non-radiative decay: the system falls back down from that excited state without emitting a photon. We don't see that, and it's not so interesting. So that means if we're measuring the absorbance, like if we're using a spectrophotometer and measuring the absorbance with Beer's law, we'll still see that it absorbs some light; we can still observe that. But if we're looking for fluorescence and non-radiative decay is the pathway that happens, we're not going to see anything. Now instead, if the electron crosses over into this other state and falls back down from there, we can also undergo non-radiative decay from that state. Again, not so interesting: we can still see that there's an absorption, but the emission doesn't show anything. And in this case, the state multiplicity matters. The S's are singlet states and the T is a triplet state, and this should be ringing some bells from writing down term symbols last quarter. And if it's not, don't worry too much, because we're going to do a little review of it. So the spin multiplicity of these states is important. Now, when we get into fluorescence and phosphorescence, these are the transitions where a photon is emitted when we fall back down from that excited state.
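These emitting pathways differ mostly in how fast they are, which the next stretch of the lecture details. As a rough sketch of the spread, converting lifetimes to first-order rate constants (the lifetimes below are representative orders of magnitude, not measured values):

```python
# Sketch: treating each decay pathway as first order, k = 1/tau. The point
# is how widely separated the competing rates are, not the exact numbers.
lifetimes_s = {
    "absorption":      1e-15,   # essentially instantaneous on these scales
    "fluorescence":    1e-9,    # spin-allowed emission, typically nanoseconds
    "phosphorescence": 1e-3,    # spin-forbidden, milliseconds or much longer
}
for process, tau in lifetimes_s.items():
    print(f"{process:15s} k ~ {1.0 / tau:.0e} s^-1")
```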
And we already said before that fluorescence happens very quickly and phosphorescence is much slower. What's going on there is that direct fluorescence is spontaneous emission of radiation, of a photon, from an excited state that has the same multiplicity as the ground state. We started out in a singlet state, jumped up to a singlet state, and then emitted a photon. Phosphorescence is what you get if there's intersystem crossing. Notice that the potentials of these upper states overlap with each other, and so sometimes the electron can jump over here to this other state; that's called intersystem crossing. And then the electron falls back down from there. That's phosphorescence, and it's usually slow. And so this diagram is useful because it shows all of these processes that can go on and helps us map out where the states are. So when we go to look at the spectra, they're going to be pretty complicated, because there are a lot of things going on, and this diagram helps us map out what they all mean. Let's see, what else did I want to say about this? I also wanted to point out that this axis down here is internuclear distance. And remember, we're making the assumption that whatever the electrons do is fast relative to the nuclei. So if we jump up to a particular excited state, the nuclei start out at, probably, the equilibrium position as far as separation: they're vibrating, but on average they're going to start out from the equilibrium state. Then we get up to some excited state, and in that excited state the optimal distance might be different. What's happening is the electron gets excited, and the charge distribution is now really different: the shape of the orbital or the state that the electron is in has changed. And so the nuclei are going to start feeling that new potential, and that's going to induce vibrations; they're going to start moving around. Okay. So this is another version of that diagram. I like the other one better as far as being able to see what's going on; I put this one in here too just because it has a lot of details as far as the time scales of what happens. Here we're pointing out that the excitation, the absorption, is happening really, really fast: 10 to the minus 15 seconds. That's a very fast process. And then we have the internal conversion and vibrational relaxation, compared to fluorescence, which is on the order of nanoseconds or so, whereas phosphorescence happens over a much longer period of time. So again, this picture is a little bit confusing, and the other one's better, but it does give you a lot of details about what actually happens. Okay. So notice we're talking about singlet states in all these cases. I think in order to discuss this in a little bit more detail and talk about the selection rules, we should do a really quick review of term symbols. What do you think? You did see them last quarter, yes? Yes, but maybe it didn't quite sink in perfectly. If it did, sorry about that, but otherwise it's good to have a review before we use it. Okay. So we're going to go back to general chemistry for a minute.
So we write electron configurations for atoms in the periodic table using the Aufbau rules, which I'm not going to state because I think everybody remembers them. But the deal is that these only describe the ground state electron configuration. They don't tell us anything about excited states. And here we're talking about electronic spectroscopy, so we're really worried about what's going on with the excited states. And worse than that, they're ambiguous. There are often different ways to arrange the electrons within these configurations, in terms of what their spin is and specifically which orbital they're in. In general chemistry, if there was one electron in a p orbital, we didn't worry about whether it was in a px, py, or pz orbital, because we're assuming they're all degenerate. And for a free atom, that's true. But for chemically interesting systems, a lot of times it's not. We've looked at what happens in different point groups: depending on the local environment of the atom, a lot of times those orbitals are not degenerate with each other, because of how the symmetry of the molecule works. So the problem as far as the ambiguity is that the standard electron configurations don't specify the values of m sub l and m sub s for a particular electron. If we have the ground state of boron and we fill in these electrons, we haven't specified which p orbital that last electron is going into. And again, if you're just talking about a boron atom out in vacuum, you don't care, but for some of these applications it might matter. And so we need to be able to write down the term symbol. I also want to point out that we're not saying whether it's spin up or spin down; that's the value of m sub s. And so this electron configuration is really ambiguous. There are a bunch of different things going on. So the term symbol enables us to distinguish between electron microstates. The microstate is just the specifics of exactly which orbital each electron is in and whether it's spin up or spin down. And these things are characterized by the value of the orbital angular momentum, L, which takes the values 0, 1, 2, 3, 4, et cetera. These are labeled S, P, D, F, G, just like in the atomic orbitals, except we're using capital letters here. So that's just a way of giving the orbital angular momentum, and to get that value for the whole atom, you have to sum over all of the electrons. Then there's also an S term, the spin angular momentum, which is summed over all of the electrons again, and that goes in increments of one half, because electrons are spin one half. And 2S plus 1 turns out to be an important quantity in the term symbol; that's the spin multiplicity. So when we're talking about singlet and triplet states, that's what we're talking about there. We also have to worry about the total angular momentum, which is called J, and that is equal to L plus S. And so here's what your term symbol looks like: we have our value for the orbital angular momentum, the multiplicity 2S plus 1 is written as a superscript in front of the L, and then our total angular momentum is written as a subscript. Again, this should look familiar, but if you didn't use it for anything right away, it's nice to have a little review.
Okay, so when we talk about the term symbols for atoms, we need to remember that we've got a z component of the angular momentum, which is a scalar, while the overall angular momentum is a vector. And we can sum both of these things over all the electrons in the atom. So L sub z is what we get when we sum up all of the m sub l values, and S sub z is what we get from summing up all of the m sub s values. These are the options those can take. So let's look at how to write these. We'll start with an easy example, and then we'll do a harder one. Okay, so for the helium atom, the electron configuration is 1s2. So what do you think? Is that one ambiguous, or is it pretty well specified? It's good, right? We've got two electrons in that s orbital. They're paired. There's nothing else they can do. This one's actually really well specified, so it's relatively easy to pay attention to all the microstates and write them down. Let's do that for an example where we know it's easy. Okay, so m sub l for the first electron: I'm calling them 1 and 2 just for the sake of labeling. Of course we can't tell them apart, because they're electrons, but here we're labeling them. If we have the first one as spin up, we know m sub l is zero, because they're in an s orbital, and if one of them is spin up, the other one has to be spin down. And so if we add up the total values for M sub L and M sub S, we get zero for both. Then we have 2L plus 1 values of big M sub L, but in this case the only value that's possible is zero, and so we can deduce that L equals zero. We know that M sub S goes from plus S to minus S in increments of one, 2S plus 1 values, and the only M sub S value we have here is zero, and so S equals zero also. Then we can also sum these things together to get J, and we get that J equals zero too. So again, this is an easy example, but we're just going through how to set up the microstates and then deduce the relevant values. And what we get out of this is that the term symbol is singlet S zero: the spin multiplicity out in front, the S telling us the value of the orbital angular momentum is zero, and then our J value. And this singlet S zero is what you get for anything that has a closed sub-shell; that's what it's going to look like. Okay, so now let's do a harder example. That one was very simple. Let's look at the carbon atom. In this case, we have a lot more microstates that we have to worry about. There are more possibilities for the electrons to adopt different configurations. And we have two filled sub-shells, 1s2 and 2s2. Those are going to have the singlet S zero term symbol; any filled sub-shell has it. So we can just write that down. Okay, so now we have to deal with the 2p electrons, and for that we have six possible spin orbitals. A spin orbital is the combination of which orbital the electron is in and what its spin is, up or down. And given that we have six possible spin orbitals, that gives us 15 microstates. How I got that: there are six places to put the first electron, since it can be in each of the three p orbitals, either up or down. Two electrons can't occupy the same state in the same atom, so when I go to put the second one somewhere, one of those configurations is already used up, which gives me five options for the second one. But then the electrons are indistinguishable, so only half of those microstates are unique.
So that's how we end up with 15: six times five, divided by two. So let's figure out what they are. We're going to go through and make a table of the microstates, and we're going to see which term symbols we get from it: a bunch of term symbols corresponding to the possible configurations of these electrons. All right, so each electron can be either spin up or spin down. Alpha is spin up; that means m sub s is plus one half. Beta is spin down; m sub s is minus one half. And we can stick each electron in any of the three p orbitals, either up or down. So we're going to make a table of the possible microstates this thing can occupy with just these two electrons. The notation here is just that for each electron I write its m sub l value, with a plus for spin up and a minus for spin down. Let's do a couple of examples. So we've got one-plus one-plus; of course, we're going to see that this doesn't turn out to be a real possibility, because the electrons can't be in the same state, with the same set of quantum numbers. And so for big M sub L equals two, we're going to see that the only allowed possibility is the one in the middle. If we go down and look for ways to come up with big M sub L equals one, we can have zero-plus one-plus, one-plus zero-minus, one-minus zero-plus: different ways to add up to that value of M sub L. For M sub L equals zero, there are even more ways to add it up. So we've got options for M sub L equals plus one, zero, and minus one, and there are different combinations of microstates that reach each value. And so we can go through and make a table of all the possible microstates. And then what we're going to see (and if you don't get a chance to scribble all this down, don't worry about it; it will all be there, and hopefully this is review) is that some of these microstates violate the Pauli exclusion principle. One-plus one-plus means you have m sub l for electron one equals one with m sub s equal to plus one half, and m sub l for electron two equals one with m sub s also plus one half, and that's not allowed. It violates the Pauli exclusion principle. I started by just writing down all the possibilities that you could have, but this one doesn't work, and we can see the same thing for one-minus one-minus. And so we can go through and cancel all of these microstates that are forbidden by the exclusion principle, and we see that we have 15 microstates left, as we expect. The forbidden ones are just written down for completeness, going through all the possible combinations that could exist, but they do not work out. So let's work with the ones that are left. And so now what we're going to do is go through, find the values of big M sub L and big M sub S that we have, and deduce our L, S, and J values. The largest value of M sub L is two, and that happens when M sub S equals zero. So L equals two and S equals zero for this particular term symbol, and we can write that down as some kind of a singlet D state. We still haven't found J, but we can save that for later. And so if L equals two, then we have values of big M sub L of two, one, zero, minus one, and minus two, and now we need to account for all of the microstates corresponding to those from this table.
So we want to cross out one microstate from each row of the middle column. Don't get confused between the things that violate the exclusion principle and the things we're crossing out because they've been accounted for in a term symbol. Here we're just saying: this microstate is accounted for by these particular values of L and S; it belongs to the singlet D state. So we have one in this row, and we're going to cross out one from each row. Why did I pick those particular ones? It's arbitrary. We just want to know how many of these microstates belong to that particular term symbol. Okay, so that accounts for the singlet D. We still have a bunch of microstates left. The next largest value of M sub L is one, and that happens when we have values of M sub S of plus one, zero, and minus one. So now we need to account for those microstates: L equals one, and our spin multiplicity is now three, so this is a triplet P state. We account for the microstates corresponding to that, again crossing out one from each place in an arbitrary manner, and that gets rid of nine of them. And so now all we have left is M sub L equals zero, M sub S equals zero, and that gives us a singlet S state. So we know which term symbols are available for this carbon atom; we just have to find the subscripts for all of them. So we have a singlet D state, and we know that M sub S equals zero. Here are the possible values for M sub J: from two to minus two in increments of one, and so that means that J has to be two. And so here's our final term symbol for that. We also know that the degeneracy is 2J plus 1, so that's five. Then we can move on to the triplet P states that we found, and we have all of these values for M sub J. Here we have to be careful, because we have more than one set of J values. We have one set, two, one, zero, minus one, minus two, that corresponds to J equals two. But then we also have a set corresponding to J equals one: there's a one, a zero, and a minus one in there. And then we also have an extra M sub J equals zero left over, corresponding to J equals zero. And so we get three of these triplet P terms with different values of J. And then the last thing we have left over is this singlet state, which is just singlet S zero. So if you have an electron configuration ending in np2, these are the term symbols that we end up with. Okay, so how do you know which one of these is the ground state? Hund's rules tell us that the state with the largest value of S is the most stable. If you have states with the same value of S, then the higher L value is more stable. And if these are the same, then which J value is most stable depends on whether the sub-shell is more or less than half full. And so for this particular set of term symbols, the triplet P zero is the ground state. Okay, so that was just a review of term symbols. That is a different way of doing it from how it is in your book. If you like how it is in your book better, that's completely fine. If you like this way better, that's good too. Question? What's np2? np2 is the end of an electron configuration. Here we said 1s2, 2s2, 2p2. For any electron configuration, you would get a bunch of singlet S zero terms for the closed shells, and then this procedure would handle the valence electrons.
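The whole table-and-cross-out procedure is mechanical enough to script. Here's a minimal sketch for the p2 case: it enumerates the 15 Pauli-allowed microstates and peels off the terms the same way we just did by hand (it stops short of assigning the J subscripts, which still follow from the M sub J bookkeeping above):

```python
from itertools import combinations
from collections import Counter

# Six p spin-orbitals: (m_l, m_s) with m_l in {1, 0, -1}, m_s = +/- 1/2.
spin_orbitals = [(ml, ms) for ml in (1, 0, -1) for ms in (0.5, -0.5)]

# Pauli exclusion + indistinguishability: C(6, 2) = 15 unique microstates.
micro = Counter()
for (ml1, ms1), (ml2, ms2) in combinations(spin_orbitals, 2):
    micro[(ml1 + ml2, ms1 + ms2)] += 1           # tally by (M_L, M_S)

terms = []
while any(n > 0 for n in micro.values()):
    # The largest surviving M_L fixes L; the largest M_S at that M_L fixes S.
    L = max(ML for (ML, MS), n in micro.items() if n > 0)
    S = max(MS for (ML, MS), n in micro.items() if ML == L and n > 0)
    terms.append((L, S))
    # Cross out one microstate for every (M_L, M_S) pair this term covers.
    for ML in range(-L, L + 1):
        MS = -S
        while MS <= S:
            micro[(ML, MS)] -= 1
            MS += 1.0

for L, S in terms:
    print(f"{int(2 * S + 1)}{'SPDFG'[L]}")       # prints 1D, 3P, 1S
```

Running it prints the singlet D, triplet P, and singlet S terms, matching the hand analysis.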
So one thing that's nice about term symbols is that once you figure out how to do them, there's really a limited number of patterns for electron configurations. Okay, so what we're really interested in, for the stuff we're going to do this quarter, is term symbols for linear molecules. We just went over the atomic ones as a review, so hopefully you remember what they look like. But for the types of things we're going to do in terms of talking about selection rules and electronic transitions, we're interested in the ones for linear molecules. So here's what they look like. Basically, we're using Greek letters instead of Roman letters for a lot of the pieces of the term symbol. Here, S is still the total spin quantum number. What was L before is capital lambda here: the orbital angular momentum along the internuclear axis. It's a linear molecule, so that's along the bond. Omega here is the total angular momentum along the internuclear axis, as opposed to just the orbital part. g or u is the parity, with respect to inversion through a center of symmetry. And plus and minus is the reflection symmetry with respect to a plane that contains the internuclear axis. We probably need to look at some pictures for this to make a lot of sense. So g and u come from German; the words are gerade and ungerade, and that means even and odd, basically. So maybe that'll help you remember, I don't know. A wave function is g, or even, if it doesn't change sign under inversion, and it's u, or odd, if it does. So g and u just describe whether the wave function of this linear molecule changes sign when we invert it. And in centrosymmetric environments, anything that has an inversion center, like a homonuclear linear molecule, transitions between a g and a g or a u and a u are forbidden. It should be clear why this is, from the even-odd rule: if we have an even times an even, or an odd times an odd, and then we stick the odd dipole operator in between, we're going to end up with an odd integrand in an environment like this where there's an inversion center. And that's called Laporte's rule. Again, it only works if your molecule has an inversion center. So if you have two states of the same parity, two g's or two u's, that's a forbidden transition, and otherwise it's allowed. And I think that is what we're going to say about term symbols for right now. If you need to review how to do atomic term symbols, please do. We're not going to spend a lot of time on it, and it's not something that I'm really going to test you on right now; we just reviewed it in order to understand what the ones for linear molecules look like. And next time we're going to talk about selection rules and actual transitions. Have a nice weekend.
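One last sketch before moving on: the Laporte bookkeeping in a few lines, treating g as +1 and u as -1 so the odd dipole operator can just be multiplied in:

```python
# Sketch of the even/odd argument: the dipole operator is u (odd), so the
# integrand psi_i * mu * psi_f is even overall only when the two states have
# opposite parity. Represent g as +1 and u as -1.
PARITY = {"g": +1, "u": -1}

def dipole_allowed(state_i, state_f):
    """True for g <-> u; g-g and u-u give an odd integrand and vanish."""
    integrand = PARITY[state_i] * PARITY["u"] * PARITY[state_f]
    return integrand == +1          # even integrand -> integral can be nonzero

print(dipole_allowed("g", "u"))     # True  (allowed)
print(dipole_allowed("g", "g"))     # False (Laporte-forbidden)
```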
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 13. Molecular Structure & Statistical Mechanics -- Electronic Spectroscopy -- Part 2. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:00:48 Resonance Raman 0:05:41 Fluorescent Bacteria 0:12:11 GFP Fluorophore 0:14:27 Jablonski Diagram 0:19:19 Fluorescence and Phosphorescence 0:20:47 Aufbau Rules 0:37:56 Find J(subscript) 0:39:53 Hund's Rule 0:41:19 Term Symbols for Linear Molecules
10.5446/18917 (DOI)
So today we're going to finish up our discussion of examples of vibrations in polyatomic molecules, and how we can figure out which ones are IR and Raman active by symmetry. And then we're going to move on to some simpler cases. We're going to talk about linear molecules, where we can calculate, using some pretty easy calculations, what the bond length and the force constant and things like that are for the bonds involved. Of course, we can do that for polyatomic molecules too, but we need a computer. In the discussion of which vibrations are IR and Raman active, one of the things that came up last time is the fact that we don't learn anything quantitative about these vibrations from that analysis. We know whether we can see them in the spectrum, but that doesn't tell us what energy they show up at, and it doesn't tell us anything about how intense they are. For that, we need some additional information; we can't get it just from group theory. So we will get into that today for the case of diatomic molecules. Then next time, on Wednesday, we're going to go over selection rules in a little bit more detail. We've already talked about this in a qualitative way; we're going to look at how you actually calculate it. And then that's going to be it for the first midterm. So let's see if we can get all that done. Does anybody have any questions before we start? Yes? The material for the midterm does not cut off early; everything through what we do on Wednesday is going to be on it. Okay. So let's continue talking about bond vibrations. All right. We've been doing this example of methane, which, as I said, is going to be a little bit harder than what you're probably going to have to do on the exam. So if you get this, you're in good shape. Okay. So last time we talked about the bonding; now we're going to talk about its vibrations. And so the question is: identify the vibrational modes and determine whether they're IR or Raman active. And of course we're going to do this using group theory. As always when we talk about molecular motion, our basis is now a little coordinate system on each atom, and we need to look at how these things transform when we move the molecule around. Okay. So methane has five atoms, and we have to take them all into account, because we're talking about these vibrations. Remember, the basis we set up always has something to do with the symmetry of what we're actually looking at, and here what we're interested in is the displacement of the atoms relative to each other. So we need our little coordinate system on each atom, and so there are 15 little unit vectors in our basis here. If you want to set up the actual matrices for this, you need 15-by-15 matrices. So let's not do that; let's use the shortcut to get the character. Again, we looked at the water example, where you get something that's 9 by 9. It's useful to do that once, for one operation, and just prove to yourself how it works, and then you're done. Once you've seen it, that's good. Okay, so for the identity operation, nothing changes, of course, and so we get 15 for the character of that. Our identity matrix just looks like a 15-by-15 matrix with 1's on the diagonal and 0's everywhere else. Now let's look at what happens when we look at the C3 rotations.
So this is where it starts to get a little bit more complicated, because when we do a C3 rotation of methane, our basis is not just the individual bonds anymore, like it was when we were looking at the bonding. There we could use the shortcut, and it was simple enough, because we were talking about the bonds just swapping places. Here we have these little x, y, and z axes on each atom, and when you do a 120 degree rotation, x doesn't map onto y directly; you end up with things that are in between. So let's consider this piece by piece; I'm not going to set up the whole giant matrix. For the C3 rotation, we're holding the molecule by one of the H's and rotating it around that axis. So let's look at the carbon and the H that's on top. Those two stay in place and behave the same, and for each of them we have to fill in the whole 3-by-3 rotation matrix, as we learned before. You plug in the actual angle that we're rotating through, and you find that the trace of that block is zero. And you can do this for the other hydrogens as well: they all change places in space, so they contribute zero to the trace also. This is something that, if you're not sure about the rotation matrices, is a good thing to practice. Also, I would recommend putting the general form of a rotation matrix on the cheat sheet you get to write, if you don't remember it; it's a useful thing to know how to build. Okay, so again, the reason we can't use the standard shortcut in this case is that, for this particular operation, the elements in our basis don't map onto each other in a one-to-one fashion, so we have to break it down like this. What we find is that the character of C3 is zero. Okay, so now let's look at C2. Remember, for the methane molecule the C2 axes run between the carbon-hydrogen bonds, bisecting the H-C-H angles. Before, we were holding the molecule by one hydrogen; now we have to hold it with two of them sticking up, one coming out at you and one pointing at me. If we look at our C2 axis there, we have to treat the carbon and the hydrogens separately. All of our hydrogens switch position when we do that, so those contribute zero to the character. And on the carbon atom, z is going to stay the same, and x and y each contribute minus one. So if we add all those up, we get minus one for C2. Okay, how about S4? Remember, this one's hard to visualize in methane. We hold the molecule by the top hydrogen again, so that the other three hydrogens stick out like this, rotate 90 degrees, and then reflect through a plane perpendicular to that rotation axis. When you do that, you see that the z axis on the carbon changes sign and everything else gives you zero. So we get minus one for S4. And again, if you have trouble visualizing that, go check out the molecular models and make sure you can prove it to yourself. Okay, so the last thing that we have to deal with is our dihedral planes. On these vertical planes, we have to look at what each atom is doing. For the carbon, x and z stay the same and y changes sign, and the same thing happens for the hydrogens that are in the plane. The other hydrogens get reflected into each other, contributing zero. And so we have to add up all these contributions to get the character, and when we do that, we end up with three.
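Both this step and the reduction coming up next are mechanical enough to check by script: each atom that an operation leaves in place contributes 1 + 2cos(theta) to the character for a proper rotation, or -1 + 2cos(theta) for an improper one, and the reduction formula is a weighted dot product with each row of the character table. A sketch, with the Td table hard-coded from a standard reference:

```python
import math

# Character of the 3N Cartesian displacement basis: each unmoved atom
# contributes (1 + 2cos(theta)) for proper rotations C(theta) and
# (-1 + 2cos(theta)) for improper ones (E is C(0); a mirror is S(0)).
def chi(unmoved, theta_deg, proper=True):
    c = 2.0 * math.cos(math.radians(theta_deg))
    return unmoved * ((1.0 + c) if proper else (-1.0 + c))

# Methane, Td classes: E, 8 C3, 3 C2, 6 S4, 6 sigma_d.
sizes = [1, 8, 3, 6, 6]
gamma = [chi(5, 0), chi(2, 120), chi(1, 180), chi(1, 90, False), chi(3, 0, False)]
print([round(x) for x in gamma])            # [15, 0, -1, -1, 3]

# Reduction formula: n_i = (1/h) * sum over classes of g(R)*chi(R)*chi_i(R).
td = {"A1": [1, 1, 1, 1, 1], "A2": [1, 1, 1, -1, -1], "E": [2, -1, 2, 0, 0],
      "T1": [3, 0, -1, 1, -1], "T2": [3, 0, -1, -1, 1]}
order = sum(sizes)                           # group order h = 24
for name, row in td.items():
    n = sum(g * x * xi for g, x, xi in zip(sizes, gamma, row)) / order
    print(name, round(n))                    # A1 1, A2 0, E 1, T1 1, T2 3
```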
So now we have a reducible representation for the motions of methane. Remember, we're not just looking at the vibrations yet: this basis tells us something about the displacement of all the atoms. And so we have to reduce our reducible representation, and we use the reduction formula to do that. Here's what we end up with: A1 plus E plus T1 plus 3 T2. We had 15 vectors in our basis, so we should end up with 15 elements here. It might look like we don't have enough, until you remember that E is doubly degenerate, so that counts as 2, and T is triply degenerate, so every T counts as three symmetry species in that representation. Okay. So after we reduce our representation, we have to take out the symmetry species that correspond to translations and rotations, because those don't show up in our vibrational spectra. If we look at the character table, we see that translations take care of one of the degenerate sets of T2: there's an x, y, and z in there, so translation of this molecule along any of the axes corresponds to that, and that removes one of them from consideration. Okay, so how about rotations? Rx, Ry, and Rz are all degenerate, and those fall into the T1 category. Again, we're just reading this right off the character table. So the vibrations are everything that's left over: A1, E, and 2 T2. And then you can look at the character table and decide which ones are IR and Raman active. Okay, so let's look at what the actual vibrational modes are for methane. They're a little bit hard to visualize, but this is a good picture of what's going on. We have this wag, where some of the bonds are moving with respect to the other ones; there's a twist; the symmetric stretch is easy to see, that's the one where all the bonds are flexing in and out together; and then there's the scissors motion. If we try to put this in terms of the answer that we got: the A1 symmetric stretch has no component of the dipole moment associated with it, so it's IR inactive, but x squared plus y squared plus z squared transforms as A1, so it is Raman active. The modes that have symmetry E are Raman active only, and the T2 vibrations are both IR and Raman active. So let's take a look at what this looks like in practice. If we look at the IR spectrum of methane, we expect it to show just the T2 modes, each of which is triply degenerate. So here's our IR spectrum of methane, and we can see that we get these bands. Again, the group theory analysis just enables us to say that they're there; it doesn't tell us anything about where they are or what the intensities are. And if we look at all this fine structure, those are the rotational transitions. When we excite this molecule to an excited vibrational state, there's enough energy in there to excite the rotational transitions too, and so we see all of these rotational lines. We're going to talk about that in more detail in a minute for diatomic molecules. Here's the Raman spectrum of methane. Notice that the intensities here are higher than the ones on the other side: these are the Stokes lines versus the anti-Stokes lines. The Stokes side is the one where the electromagnetic radiation gives up a quantum of energy to the molecule, rather than taking it from the molecule, and we can rationalize the relative intensities that way.
But otherwise, these spectra look kind of complicated, and the rotational fine structure of the different modes overlaps. To really predict what this is going to do, you need a computer. You can do it, and you can do it very accurately, especially for molecules that are relatively small like this. But if you really want to understand the details of what's going on, it's best to do it computationally. For smaller things, like diatomic molecules, we really can do these calculations just with a pencil and paper, and so it's more useful to zoom in and take a closer look at those. Okay. So let's go back to thinking about what this looks like in a theoretical sense. We're talking about motions of the molecule. For a diatomic, this is a lot easier to visualize, because all we have is just this molecule vibrating in and out. There's only one motion going on, so the spectra are a lot easier to interpret, and we can do things a lot more quantitatively without using computational methods. And again, I don't want to imply that the computational methods aren't very precise or that you can't get good answers out of them. You really can; you just can't do it easily in class. So we're going to focus on this for purposes of calculating things quantitatively. Okay. So, things to notice about the harmonic oscillator formalism; these are all things that you learned last quarter. We have a system where there's a harmonic potential well. That's an approximation, of course. We're treating the bond as a harmonic oscillator, which doesn't mean it's always really like that, but for small displacements it works pretty well. Notice that there is a zero point energy: the lowest state is not exactly at the bottom of the potential. That's important for the harmonic oscillator treatment. And again, remember it's different from the rigid rotor approximation. There we were allowed to have a zero rotational state; here we're not. Remember that these things are quantized, and we have energy increments of h nu as we go up in energy. And of course we're looking at potential energy, so we can write it as one-half k x squared, and it's a one-dimensional system, dependent only on the internuclear separation. And x equals zero doesn't mean the nuclei are touching each other: x is measured from the equilibrium internuclear separation. So when we measure bond lengths of molecules, of course, what we're measuring is the equilibrium distance or the average distance. In reality, these things are always moving around and vibrating back and forth. Okay, so we can use this to get quantitative information about things like the bond length and the force constant, which tells us something about how stiff the bonds are, and we can calculate it quantitatively using some pretty simple approximations. If you don't remember some of this from last quarter, it's useful to go back and review what the harmonic oscillator wave functions look like: they're Gaussians multiplied by Hermite polynomials. You should have seen this last quarter. It's useful to go look at them, particularly since next lecture we're going to talk about selection rules, and you might have to look at the symmetries of different vibrational states.
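As a quick numeric sketch of the central harmonic oscillator relation, omega = sqrt(k/mu), taking k of about 481 N/m (a commonly quoted literature force constant for HCl, used here as an assumed input):

```python
import math

# Sketch: harmonic vibrational frequency from a force constant and reduced mass.
AMU = 1.66054e-27                              # kg per atomic mass unit
k = 481.0                                      # N/m, assumed literature value
mu = (1.008 * 34.969) / (1.008 + 34.969) * AMU # reduced mass of H-35Cl, kg

omega = math.sqrt(k / mu)                      # rad/s
nu = omega / (2.0 * math.pi)                   # Hz
wavenumber = nu / 2.998e10                     # cm^-1 (c in cm/s)
print(f"{wavenumber:.0f} cm^-1")               # ~2886, near HCl's observed band
```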
So remember, our quantum number here is typically called nu for the harmonic oscillator wave functions. So we have nu equals zero and nu equals one, and then all of these little transitions in between, represented by the red and blue arrows, are the rotational transitions. Remember, vibrational transitions take a lot more energy than rotational ones. So when we excite the vibrational transition, all the rotational ones come along for the ride, and we see them in our spectra. And it turns out that's kind of useful, because we can use them to get some valuable information. And so we're representing these states in Dirac notation as the state with quantum number nu for the vibrational state and quantum number J for the rotational state. And if we look at the IR spectrum, in this case it's HCl, we have two sides of the spectrum. On this side we're going from the state nu equals zero with rotational quantum number J, up to nu equals one with J minus one: starting in the lowest vibrational state, kicking it up to the first excited vibrational state, and going down one step in rotational state while we do it. On the other side we're going the other way: from zero to one in the vibrational transition, and going up in the rotational state. (Sorry, the label on this slide needs to get fixed; I have it correct on the next one.) Another thing that I want to point out before we move on in IR spectra is that sometimes you'll see them plotted with the peaks pointing down, sometimes with the peaks pointing up. It's just different conventions; it doesn't really matter, and that's something you'll see in the literature. It's also important to pay attention to whether your spectrum is plotted in frequency units, wave numbers, or wavelength, because the direction of increasing energy differs: energy increases with frequency and with wave number, but goes the opposite way with wavelength. Okay, so these things also have historical names. If we're going down in rotational states, that's called the P branch, and if we're going up in the rotational states, that's called the R branch. These are just historical names. It's useful to know they exist for purposes of reading the literature, but for the most part I'm mostly concerned that you understand the physical basis for this and what's going on. Again, here are some examples of spectra: here's one in wave numbers, here's one in frequency units. You will see both, and you should definitely know how to deal with both. Being able to fluently convert between frequency, wave numbers, and energy units is definitely something that you should be able to do. Okay, so let's look a little bit closer at why these spectra look the way they do. We have an R branch and a P branch, and there's all this rotational fine structure in there, and it's symmetric, because on one side we're going down a quantum in rotational state and stepping through all the different transitions there are, and on the other side we're going up. Remember, a line doesn't represent a state; it represents a transition between two states. But there's no peak in the middle: the fundamental frequency we're looking at here, which tells us about the energy of going from the zero to one vibrational state by itself, doesn't have a peak there. And the reason for that is the selection rules.
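Before the selection rules: since fluent unit conversion keeps coming up, here's a minimal sketch of the two conversions you need constantly (nu = c times nu-tilde, and E = h nu):

```python
# Sketch: converting between wavenumber, frequency, and energy per photon.
# E = h*nu = h*c*nu_tilde; nu_tilde in cm^-1 means using c in cm/s.
H = 6.62607e-34           # J*s
C_CM = 2.998e10           # cm/s

def wavenumber_to_hz(nu_tilde):       # cm^-1 -> Hz
    return nu_tilde * C_CM

def wavenumber_to_joules(nu_tilde):   # cm^-1 -> J per photon
    return H * wavenumber_to_hz(nu_tilde)

print(wavenumber_to_hz(2886.0))       # ~8.65e13 Hz
print(wavenumber_to_joules(2886.0))   # ~5.73e-20 J
```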
So we have to have a difference in rotational state of plus or minus one. So delta J equals zero is not allowed. So we don't see a transition here. So when we want to get that fundamental frequency, we have to pick the point in between these two sets of rotational lines. Okay, what else is there to say about that? Yeah, so the Q branch is the name for the central line which is missing. As we'll see next lecture, there are some times when you do get a peak there and we'll talk about what situations that arises in. And when it's there, it's called the Q branch. The other thing I want to mention again is look at the intensities here. So the lowest energy transition is not the one that has the highest intensity. There's some maximum up here. And again, that's because of the degeneracy. When we get to a little bit higher energy states, there are more ways for the system to be in that state and so it's more highly populated. If we were to raise the temperature so that more energy is available, we would see that those curves would flatten out. We would get the higher energy states more populated and also just the whole thing would spread out because there's more degeneracy, there's more ways to occupy those states. Okay, so we've talked about what these spectra look like and how to interpret them in a really qualitative way. Let's get into actually calculating some things from them. Okay, so here's our spectrum of HCl and this particular one is plotted in frequency units. And we have this pair of lines in the center that's on either side of the fundamental transition from nu equals 0 to nu equals 1 without a change in rotational state. And so that frequency is what's going to tell us about the energy of the vibrational transition that we want. Another thing I didn't bring up before is if you look at this, now that it's blown up like that, you see that the lines are split. Does anybody know why that is? So we're talking about these bond vibrations. There's really only one vibration that it can undergo. So why would it have two different lines? It's like there's a very slightly different energy there in the bond vibrations. Yes? Is it one of the things that you're trying to do when it's just coming? That's a good guess, but let's see. Is it an isotope effect? Yeah, it's an isotope effect. So if we have, say, deuterium instead of hydrogen, there's going to be some little natural abundance population of different isotopes in the sample. And deuterium is going to be a lot heavier than hydrogen, and so we'll see a difference in the vibrational frequency. For HCl specifically, the split pair comes from the two chlorine isotopes, 35Cl and 37Cl, which are both present at natural abundance. And a lot of times when we're interested in looking at vibrational states of molecules, one way that you can do that is isotopically label specific things in order to make the frequency a little bit different. If we have time to get into some applications next time, I'll show you some examples of that where people have been very clever about using isotope labeling to break the degeneracy and look at different vibrations in a complicated molecule. OK, so for now, let's stick to a not very complicated molecule because we can calculate a bunch of stuff about it quite easily. OK, so I like this picture because it's showing us, on the energy level diagram, what we're seeing in the spectrum. OK, so we're looking at this transition from nu equals 0 to nu equals 1, and then we have the corresponding rotational transitions showing up here.
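To see the branch structure concretely, here is a minimal sketch of where the lines fall for a rigid rotor; the constants are assumed, HCl-like round numbers rather than values read off the slide:

```python
B, nu0 = 10.6, 2886.0  # cm^-1; assumed HCl-like rotational constant and fundamental

R = [nu0 + 2 * B * (J + 1) for J in range(4)]  # R branch: J -> J+1, above nu0
P = [nu0 - 2 * B * J for J in range(1, 5)]     # P branch: J -> J-1, below nu0
print("R:", R)
print("P:", P)
# No line sits at nu0 itself: delta J = 0 is forbidden here, so the innermost
# P and R lines straddle the fundamental with a gap of 4B.
```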
So we have to take the average position of these two lines and get the center frequency, and that's going to tell us the fundamental frequency of that vibrational transition, which is what we want to know. And so we know that in general, the spacing between the lines for rotational spectra, same thing here, is 2B. It's two times the rotational constant. But that's for the ones that are spaced apart evenly. Here there's a center one missing. So that center pair of lines is spaced apart by 4B. And so our change in frequency here, called delta f, is 4B over h, and that's what it is in hertz. So let's put this in terms of what we're actually interested in finding out. So our rotational kinetic energy is one-half I omega squared. And we can write this down in terms of the reduced mass. So here our moment of inertia can be written pretty easily in terms of the reduced mass, because it's a diatomic molecule. And again, as we've seen for a diatomic molecule like this where one of the atoms is really heavy and the other one is light, you've hopefully seen this in some of the homework problems, you end up with the vibration looking like the chlorine is just staying still and the little proton is bouncing in and out. OK, so we know the spacing between the rotational lines, which is the same for the fine structure in the vibrational spectrum as it was for the pure rotational spectrum that we looked at. And it's 4B for the central transition. And so we can just plug that in and solve for the change in energy between these states. And so we're interested in getting the change in energy, which is just equal to h times the frequency difference. So as with any kind of spectroscopy problem like this, E equals h nu, that energy difference for the transition is the same as the energy of the photon that went in and was absorbed to promote that molecule to the higher state. The only thing that's a little bit tricky here is that we have to remember what the states actually mean. So we're not able to see a direct line for the transition that we're interested in. We have to infer it from the structure of the rest of the rotational lines. Okay, so we have the energy there; I didn't work out the number, but we know what it is. It's just h times that frequency. So we have the microwave value that was calculated for the bond length of HCl. So that means somebody went and measured the direct rotational spectrum of this molecule and got the bond length, which again we know how to do. We looked at that. And that's given as 0.127 nanometers, which I looked up. So now we're going to calculate the bond length of HCl from the vibrational spectrum and see how well it agrees. Okay, so we have our mu r squared equals 2 h bar squared over the energy here. And if we plug in values and solve for r, within the significant figures that we have here given the numbers that we got, we do get the same answer. So hopefully this convinces you that you can use this pretty simple analysis to get information about the molecules, at least in the case of diatomic molecules. And again, obviously this is a lot more useful for larger molecules that we don't already know everything about the motions of. And there we use the same procedure. It's just that you need computational methods to do it. Okay, so let's continue our discussion. We need to look at just how to deal with different units, just as a reminder. So again, here's the same spectrum.
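And here is a numerical sketch of the bond-length calculation that was just described. The central-gap value of 1.27e12 Hz is an assumed number chosen to be consistent with B of roughly 10.6 wave numbers; it is not quoted from the slide:

```python
hbar = 1.054571817e-34    # J*s
h    = 6.62607015e-34     # J*s
amu  = 1.66053906660e-27  # kg

m_H, m_Cl = 1.00783 * amu, 34.96885 * amu
mu = m_H * m_Cl / (m_H + m_Cl)   # reduced mass of H-35Cl

delta_f = 1.27e12        # Hz, spacing of the central pair of lines (assumed)
delta_E = h * delta_f    # J; this gap corresponds to 4B in energy units

# From delta_E = 4B = 2*hbar**2 / (mu * r**2), solve for r:
r = (2 * hbar**2 / (delta_E * mu)) ** 0.5
print(f"r(HCl) = {r * 1e9:.3f} nm")  # ~0.127 nm, matching the microwave value
```

The same arithmetic works when the spectrum is in wave numbers: half the line spacing gives B in cm^-1, and multiplying by hc converts it back to an energy.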
This time it's plotted with the peaks going down and it's in wave numbers. They're just different conventions and you'll see both in the literature. Spectroscopy is really the land of confusing notation and things being given in different units and drawn different ways. That's just because of historical conventions. So one issue is that chemists and physicists are looking at a lot of the same things and they have different conventions in how they present things. If I were the dictator of the world, it wouldn't be that way, but we don't get to pick how these things are represented. And so that's part of it, is just learning the conventions that you're going to see in different parts of the field. And particularly if you're going to some of the P-Chem seminars and seeing people's current research, that's part of the language and part of being able to understand what's going on there. So let's look at this. So if we want to calculate the bond length and you have the spectrum in wave numbers, you want to pick a pair of lines that's relatively close to the center. So why do we want to pick a pair of lines that's close to the center? We know that the rotational constant is the same and these lines are spaced by 2B in the rotational fine structure. But remember, we have centrifugal distortion. So when we get the molecule excited to higher and higher rotational states, then it doesn't behave as a perfect rigid rotor anymore. It's stretching more and more. And that's the same thing here. So if we pick lines corresponding to really high energy rotational states, we're not going to get the best value for our rotational constant there and there will be unnecessary error in the calculation of the bond length. So we want to pick something from a part of the spectrum where our assumptions are more likely to be correct. So pick a couple of them that are close to the center. And so that gives us B in wave numbers. So here it has a tilde over it. So it's in inverse centimeters rather than frequency units, with the appropriate conversion. And then we can plug all our stuff in and we get something for I. And again, it's good to keep track of your units. So for our moment of inertia, we get kilogram meters squared, which we should. And then we can solve for R given that we know the expression for moment of inertia of a diatomic molecule. And then the last step is check that your answer is in the right units and that it's a reasonable order of magnitude. So we did get length units, which is good because we're talking about a bond length. And we got something that's on the order of angstroms, which is a reasonable value. So as far as calculating these things, paying attention to units and whether you got a reasonable order of magnitude is a large part of the battle that will really get you a long way in terms of figuring out whether your answer makes sense. Okay, so this is one type of information that we can get from vibrational spectroscopy. So we can learn about the lengths of bonds. We can also learn about the force constant of bonds. So the force constant is telling us something about how floppy or how stiff that bond is. So we're going with the harmonic oscillator approximation. We're just assuming that we have little springs in between our atoms and they're bouncing back and forth. And, of course, you can have a really soft spring or you could have a very stiff spring. And the force constant of the molecule is what tells us about that.
And so that has to do with the vibrational frequency. So notice that in order to get the bond length, we didn't actually use anything that had to be gotten from the vibrational spectrum. We're just using the fine structure of the rotational transitions, which we could have gotten from microwave spectroscopy. But microwave spectroscopy is not typically used very much because we can get all of this information from something like IR. Okay, so omega is our angular frequency in radians per second; we have our center frequency that we measured off the spectrum in hertz and we have to convert it. And that equals the square root of k over the reduced mass. So what's k here? That's the spring constant. We're just using Hooke's law from introductory physics, since we're making this assumption that our molecule behaves like a simple spring. And that's the force constant that we're interested in getting. So if we plug in all our values, again, remembering to pay attention to units, one of the things that could cause somebody to slip up here is that you have your mass in atomic mass units when you calculate the reduced mass. And then when we calculate our force constant, that mass needs to be in kilograms because we want to end up with something in Newtons per meter. And so you need your conversion factor between kilograms and AMU. And so we get a force constant for HCl of almost 500 Newtons per meter. So just because I imagine people are going to be concerned about this, on the exam I will give you things like the conversion between kilograms and AMU. So unit conversions and physical constants that you might need like that I'll give you. Equations I'm not going to give you because you have a cheat sheet. Okay, so we learned something about our force constant, the thing that tells us whether the bond is stiff or floppy. But since we've only talked about HCl so far, we haven't really put it in context. We don't know what that means. So let's just look at this for a few different diatomic molecules. So if we look at HF, here's its vibrational frequency in hertz and its force constant is 970 Newtons per meter. So we can see that HCl is a lot floppier than HF, which makes sense if we think about it with our intuition from general chemistry. So we know that fluorine is a much smaller atom. It has a smaller electron cloud. It's more electronegative. Those electrons are held more tightly to the nucleus. And so we expect it to have a stiffer bond when it's making a bond with hydrogen than HCl. If we keep going down, the difference for bromine is less dramatic, but HBr is floppier still than HCl. And for HI we have a much more dramatic effect. So as we're going down the halogens, we see that the force constant in the bond is getting smaller, corresponding to those electrons being held less tightly and making a longer bond. So looking in the other direction: here we're comparing going down the halogens where everything is a single bond, but if we look at carbon monoxide, that's a triple bond between two atoms that are both relatively small, and we get a very large force constant for that. So that CO triple bond is very stiff. And the bond in nitrogen monoxide is a little bit less stiff. So again, knowing these exact numbers isn't all that important except that it gives you some insight into what we're talking about when we look at that force constant, or relates it back to things that we know from general chemistry.
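A small sketch of the force-constant arithmetic, with an assumed fundamental of about 8.65e13 Hz (roughly 2886 wave numbers) for HCl:

```python
import math

amu = 1.66053906660e-27  # kg

def force_constant(nu_hz, m1_amu, m2_amu):
    """Hooke's-law force constant k = mu * omega**2 for a diatomic."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * amu  # reduced mass in kg
    omega = 2 * math.pi * nu_hz                       # rad/s
    return mu * omega**2                              # N/m

# assumed HCl fundamental frequency; gives k of roughly 480 N/m
print(f"k(HCl) ~ {force_constant(8.65e13, 1.008, 34.969):.0f} N/m")
```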
Okay, so the last thing that I would like to do today is to talk about what happens when we have anharmonic potentials. So so far we've just assumed that we're looking at harmonic potentials. Everything is ideal. We can treat everything as a perfect harmonic oscillator. What happens if we can't? So if we have an anharmonic potential that looks like this and our harmonic approximation isn't perfect, that means that we have to have a correction term to our vibrational state. And so here's the equation for the Morse potential which is a commonly used potential. And we just end up having to add some correction terms to account for the fact that our potential is not a perfect harmonic oscillator. As far as what you need to be able to do with this right now, I just want you to know that it's there, that in some cases we are dealing with molecules that aren't going to behave as perfect harmonic oscillators. And in that case, there are things that we can do and there are corrections that can be made for the anharmonicity of that potential. And in computational chemistry, one of the important things in various problems is coming up with potentials that accurately represent the physical reality. So not just for vibrational states but for all kinds of processes that happen in atoms and molecules spectroscopically. If you can come up with a potential that describes, you know, what these, what the energy differences between these states look like, that's a large part of being able to solve the problems. Okay, so we're going to quit there for today. So you should be able to do a lot of the practice problems that are online because, you know, now we've gone through how to look at these things qualitatively. All right, we still have five minutes. Please pay attention for a second. So you should now have the information to do most of the practice problems that are online. Next time we're going to talk about how you calculate the selection rules and figure out whether particular transitions are going to happen or not. And hopefully if we have some time, we'll talk about applications. Anybody have any more questions right now? All right, I will see you on Wednesday.
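For reference, the anharmonic correction mentioned in this lecture amounts to term values G(v) = (v + 1/2) omega_e minus (v + 1/2) squared omega_e x_e. Here is a sketch with assumed, textbook-style HCl constants:

```python
def morse_levels(omega_cm, omega_xe_cm, vmax=4):
    """Anharmonic term values G(v) = (v+1/2)*w - (v+1/2)**2 * w*xe, in cm^-1."""
    return [(v + 0.5) * omega_cm - (v + 0.5)**2 * omega_xe_cm
            for v in range(vmax + 1)]

# assumed constants for HCl: w ~ 2990 cm^-1, w*xe ~ 52 cm^-1
for v, G in enumerate(morse_levels(2990.0, 52.0)):
    print(f"v={v}: G = {G:7.1f} cm^-1")
# The spacing G(v+1) - G(v) shrinks as v grows; harmonic levels would stay
# exactly 2990 cm^-1 apart all the way up.
```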
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 09. Molecular Structure & Statistical Mechanics -- Vibration in Molecules. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:00:08 Methane-Vibrations 0:11:56 Vibrational Modes 0:13:35 IR Spectrum of Methane 0:15:18 Harmonic Oscillator Energy Levels 0:18:24 Vibrational and Rotational Energy Levels 0:20:37 IR Spectrum of HCl 0:39:28 Force Constants 0:41:55 Anharmonic Potential
10.5446/18914 (DOI)
Good morning. Let's get started. It's time. It is. It's time for P-Chem. So it's really cool that everyone comes to office hours. I like office hours. It's fun to get to interact with everyone on a more personal basis. I'm thinking about having more of them, maybe on Thursday. So if you have an opinion about when it should be on Thursday, please post that on the Facebook page and I'll look at it and see if there's any sort of consensus. My schedule isn't completely flexible, but I will take people's preferences into account. So go ahead and if you have an opinion, please put it on the Facebook page. At this point, we're talking about rotational spectroscopy. We're going through the different kinds of excited states that molecules can be put into as a result of interacting with electromagnetic radiation. Last time we looked at this picture, that's the big picture of spectroscopy. Let me see if I can fix the screen because that's going to be annoying. Much better. Right now, we are all the way in the bottom of the ground electronic state and we're all the way in the bottom of that well in the ground vibrational state and we're just talking about exciting rotational transitions on their own. And last time we got around to talking about a rigid rotor in a plane, so like a linear diatomic molecule that's in a plane and it's rotating about the z axis. Now we need to talk about the more general case of something that is rotating on a sphere and it has more degrees of freedom. And again, you've seen this last quarter. This was described as a particle on a sphere, and/or the hydrogen atom wave functions, which of course are familiar from general chemistry also. So we can write down our Schrodinger equation for this, and now in our kets we have two quantum numbers to keep track of, l and m sub l, because those are the quantum numbers for the spherical harmonics, and we remember what their values need to be. And if we write down the Hamiltonian in spherical coordinates with r fixed, because of course we're talking about a molecule that we're assuming to be a rigid rotor so the atoms aren't vibrating around, here's what we get for the Hamiltonian, and that should look really familiar from last quarter. And I definitely recommend, if it's a little rusty, go back and remind yourself how to convert things into spherical and cylindrical coordinates and check out the Hamiltonians for these things, because I'm not going to go through and solve the Schrodinger equations for these systems. I'm assuming that you've already done it. We can just use the results. I am however writing them down in Dirac notation just so we can get used to making that transition between these things. Okay, so here's the results. We can write our Hamiltonian in terms of the angular momentum operator L squared and here's what we get, and so from that we can pull out our energy eigenvalues. So things that should look familiar: this Hamiltonian, the solution to the Schrodinger equation in this kind of a system, and these energy eigenvalues. I also want to briefly talk about commutators because the commutation relationships of the angular momentum operators are going to be important for things that we're doing. And so, again, this is totally review from last quarter. If two operators commute then it doesn't matter what order you do them in. An example that I know you've all seen is position and momentum.
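For reference, the rigid-rotor results quoted a moment ago are, in the usual conventions (this is a sketch of the standard expressions, not a transcription of the slide):

$$\hat{H} = -\frac{\hbar^2}{2I}\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right], \qquad \hat{H}\,|l, m_l\rangle = \frac{\hbar^2\, l(l+1)}{2I}\,|l, m_l\rangle,$$

so the energy eigenvalues are E sub l = h-bar squared times l(l+1) over 2I, and each level is (2l+1)-fold degenerate in m sub l.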
And we talked about this last time: in terms of angular momentum, the equivalent pair of complementary observables are angle and angular momentum. We can't know those two things with infinite precision. And so I'm sure this is familiar enough, but now let's look at it in terms of the angular momentum operators. So angular momentum is going to come up over and over again in P-Chem. So this is kind of the most literal version of it. Here we're actually talking about a molecule rotating around and we're looking at its rotational states. But it's worth spending a little bit of extra time thinking about angular momentum because we're also going to need to deal with it in terms of spin. And things like electrons, protons, C13 nuclei have this intrinsic property called spin that is kind of mysterious actually, but it behaves in the same way as angular momentum. Mathematically we can treat it using the same formalism as something moving around. And so this is going to come up over and over again and it's worth looking at these things. Okay, so if we look at our angular momentum operators in the x, y, and z directions, here's how they're defined. And again this should be familiar from last quarter but maybe we're looking at it in a little bit different context. The main thing that I want to point out here is, well, first of all just to remind you what they are. And also I want to point out that they don't commute with each other. And in fact they have a special commutation relation. You can prove this to yourself; it's kind of tedious. But if you look at the commutators of these angular momentum operators and work them out, here's what you get. The commutator of Lx and Ly is i h bar Lz. The one for Lz and Lx is i h bar Ly. And we call that a cyclic commutation relationship. So we have this set of three operators and their commutators are related to each other in this cyclic way. Question in the back. What's the capital D again? The capital D is the partial derivative with respect to whatever the subscript is. So it's defined down here on the right. It's a useful shorthand for later on, when we're not going to have so much space. Okay, so those are some useful properties of the angular momentum operators. I'm going to have you prove in the homework another one of their properties. But so now let's look at the actual spherical harmonics. So we've been talking about a particle on a sphere, or the general case of a molecule that is free to move around in any way in space, and we have to deal with its angular momentum about each of the three axes. So we can have rotation around z or y or x and we have to be able to deal with that. So okay, they don't commute with each other. Let's look at the eigenfunctions of L squared. So L squared is the total angular momentum operator. This is a pretty fundamental property in quantum mechanics. And just to remind you, here's what the spherical harmonics look like. And I see people writing stuff down. Don't; you don't want to sit here and try to draw all these things. These will be posted online. You can also Google spherical harmonics or hydrogen atom wave functions and you'll see there are lots of neat 3D representations of these that you can play with. But I want to remind you what they are and draw a connection to what the functions look like mathematically.
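If you want to verify that cyclic relation without the tedium, here is a small symbolic sketch; it builds the operators as differential operators and is an independent check rather than anything from the lecture materials:

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)

def L_op(a, b):
    # the component -i*hbar*(a d/db - b d/da), returned as a function acting on g
    return lambda g: -sp.I * hbar * (a * sp.diff(g, b) - b * sp.diff(g, a))

Lx, Ly, Lz = L_op(y, z), L_op(z, x), L_op(x, y)

commutator = sp.expand(Lx(Ly(f)) - Ly(Lx(f)))         # [Lx, Ly] applied to f
print(sp.simplify(commutator - sp.I * hbar * Lz(f)))  # prints 0, so [Lx, Ly] = i*hbar*Lz
```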
And one of the things that you're going to have to do in the practice problems is we're going to look at selection rules and we're going to say, okay, can you have a transition from a particular rotational state to another one, and you're going to have to do that based on symmetry. Which means that you're going to have to take integrals with respect to these functions and say, alright, do these things overlap by symmetry. And there are a few ways to do that. One is if you're really, really good at visualizing stuff in your head you can look at these things and imagine whether they overlap. Unless you're really great at drawing stuff really fast, that's not going to work in the context of an exam or something like that. So you need to remind yourself about the symmetry properties of these things. And so it's important to know what the functions actually look like mathematically. So again we're going to represent our states; you've seen them as Y sub l, m sub l. That just means they're described by these two quantum numbers. We're going to look at that in Dirac notation by just sticking those two quantum numbers that we have to keep track of in the ket. So the state zero zero looks like this. And then as we go through we can put in the rest of the associated Legendre polynomials. Again, if you're frantically writing this down, don't. You can look it up. I just want to remind you that these familiar shapes from the hydrogen atom wave functions have mathematical forms that are easy to write down and we know what they are. And we can take integrals with respect to them and do things like figure out whether they overlap, and derive the selection rules for different rotational states. So another thing that I want to mention here is that in quantum mechanics we call all of these states that we're talking about wave functions, and you get really used to thinking about that in terms of an electronic wave function. So don't get confused about that. Here we're talking about different rotational states. Later we'll be talking about vibrational states. There are all kinds of different things that we have wave functions for. Okay, so that brings us to practice problems, which are going to be posted. So things that I would like you to be able to do: I want you to show that L squared commutes with Lz. So we already said that Lx, Ly, and Lz don't commute with each other. You can prove that to yourself if you want. It takes a long time. But I do want you to show that L squared and Lz commute. And I would also like you to take this Lz operator that we have in Cartesian coordinates and convert it to spherical coordinates. Have you done this before? Is that something that came up last quarter? Okay. So you've seen it but haven't necessarily done it. Yeah, it's good practice. You should go ahead and do it. And there are also a bunch of extra practice problems. Many of them are from the book. Some of them are not. Some of them I made up. They're not posted yet but I'll do it as soon as I get back from lunch. So the practice problems that are going to be posted on the website, two things that you need to know about them. One is there's a lot of them. The other one is you don't have enough information to do them all right now. So they are practice problems for rotational spectroscopy and vibrational spectroscopy. So if you don't know how to do all of them yet, don't worry, you will.
You'll see it as we go along in class or you can read ahead in the book if you want to. Somebody had a question in the middle. Yes? I'm not going to post the answers. However, if you ask your TAs nicely they'll probably help you with it in discussion. I will definitely help you with it in office hours if you want to come up with anything like that. But I'm not going to just post the key. Okay, so we talked about where rotational spectroscopy fits in in the grand scheme of spectroscopy. It's a very humble, modest little spectroscopy. It doesn't take much energy to do it. We talked about some properties of angular momentum which are going to be important for a lot of different things. Let's get into the details of rotational spectroscopy. Okay, so one of the things that we really need to know to get started learning about this is the rotational constant. So it's called B. It has a tilde over it, which indicates that it's in strange units. And here's how it's defined. It's just h bar squared over 2 times moment of inertia for the particular molecule. Now, as we see when we look at the pictures of different molecules that have different shapes in the book, some things have more than one moment of inertia. And that has some implications for what their spectra look like. But this rotational constant tells you something fundamental about the molecule because the moment of inertia is in there and it comes up in the spectra. So let's talk about what that tilde means. That means that its units are in wave numbers. And in the context of rotational and vibrational spectroscopy, whenever you see something that has a tilde over it, that's what it means. It means it's in wave numbers. And I think one of the hardest things about learning spectroscopy in general is that it is the land of messed up units and sloppy notation. And we just have to deal with it if we want to read the literature. It's an old field. A lot of this stuff is historical. It's not necessarily consistent among different parts of it. And there are sort of different units. So, alright, the wave number unit in and of itself isn't sloppy. It's just defined as reciprocal centimeters. And why do we use it? Historically it's because we're talking about rotational and vibrational spectra, and this is a unit that gives us reasonable values. We don't end up with gigantic numbers. So a typical rotational constant for little molecules of the type that we're talking about is something like a tenth of a wave number to ten wave numbers. And for vibrational ones, they'll be larger. So now why do I say it's sloppy? Well, where you get into the sloppy notation is when people start expressing energies in wave numbers. So that doesn't make sense, right? We have a reciprocal wavelength and people are referring to it as an energy. And you'll hear this. Like if you go to seminars, people say it; they're skipping some steps. So if we have something that's in wave numbers, we can get the frequency of that electromagnetic radiation. And we know that the frequency is related to the energy if you multiply it by Planck's constant. So there is a really straightforward relationship between this and an energy. And you'll see people use that as a shorthand. Okay, somebody had a question over here. Yeah. What is the C in that equation? Speed of light. How did you get from the middle portion there to that portion?
Ah, that c shouldn't be there. Okay. Thanks for pointing that out. So, yeah, that's relevant when we start talking about the energy, but it shouldn't be in the rotational constant. All right. So if we're talking about a molecule that's free to rotate about three different axes, now we need to consider different moments of inertia. So if we look at our classical rotational kinetic energy, we've got these three moments of inertia. They're labeled A, B, and C just to emphasize that it's out in free space. We could call them x, y, z, but however we want to define the coordinates, this is the general case. So this is the case of a molecule that doesn't have any symmetry. It has three separate moments of inertia. And so its classical angular momentum around any one of these axes is related to its frequency. And so here's its overall energy. And what we're going to be dealing with is the quantum analog of the situation. And we're going to look at what that looks like for the cases of different shaped molecules. Again, we're not going to get into huge levels of detail about how you calculate the different moments of inertia for molecules of different shapes. That's a good thing to look up. Okay. So here's the general case where we have three different moments of inertia. And we're going to spend a little bit of time talking about simplified cases. So in the rigid rotor approximation, we're making the approximation that the bonds are rigid and they're not moving around. So the internuclear distance stays the same. And we also have to worry about the selection rules. So selection rules are just telling us, by symmetry, which transitions can we observe in the spectrum. And the gross selection rule, which you can think of as sort of the large-scale coarse rule, is that a molecule can only have a pure rotational spectrum if it has a dipole moment. So let's think about why that is. So if we're talking about doing rotational spectroscopy, we have some electromagnetic radiation. It's exciting rotational transitions and the molecule is interacting with the E field of that electromagnetic radiation. And so you're only going to see anything if it has a dipole. So the molecule's rotating around. And if there's no change in the electron density of the molecule as that happens, like say you have N2, as it's rotating around, there's nothing for you to observe. It's like a tree falling in the forest. You can't see it. It doesn't interact with that radiation. So yes? Does m equal m A plus m B? Yes, it does. So, again, I'm not going to go into too much how you calculate these things. There's a really nice table in the book that shows you how to get the moments of inertia. We're mostly not going to focus on it. Okay. So we're only able to observe these transitions if the molecule actually has a permanent dipole moment. And of course, it's chemistry. So you can't have a rule without exceptions to the rule. We'll talk about what they are as we get further on. But the gross selection rule is you have to have a dipole moment. You have to have some change in electron density as the molecule is rotating around in order to observe it. And we get the transitions when the molecule absorbs a photon and it's at resonance. It's at the right energy to excite it to a higher rotational state. And then it changes from J initial to J final.
And so here's how we write that down in a more formal way. So we have our transition dipole for that rotational transition. And we can write down its matrix element in this form where J sub i and J sub f are the initial and final states. And a formal way of expressing the gross selection rule is that that transition dipole has to be nonzero. And it turns out that the answer you get for what it has to be is that you can have delta J being zero, where the molecule just stays in the same state, or it can be plus or minus one. And I'm not going to prove that to you at this point. I will for other types of spectroscopy later. For this one, we're just going to leave it at that. The derivation is in your book if you want to check it out. But for now, let's just use the result, and if everybody understands how we're writing this down, I'm happy with that. Okay, so let's look at what the energy levels look like. So this is for a diatomic molecule. So it's really simple. So it's a diatomic molecule. We know that it has a dipole or we wouldn't be able to see anything. And here's what the spectrum looks like. So the notation is you'll see J plus one and J, or J prime and J, and the arrow is going in the direction of the transition. So you'll see these things written down. Alright, so our rotational constant again is h bar squared over 2I. And the energy for a particular level J can also be expressed in terms of the rotational constant. So it's just that rotational constant times J times J plus one. Again, just from the eigenvalues of the L squared operator, and we have the result here of what that looks like in polar coordinates, which you're going to work with in your homework. Okay, so as a result of this, the levels spread farther apart as J goes up, and the rotational spectrum has these lines that come in increments of 2B. And remember that whenever you see a line in the spectrum, that represents a transition. So we have the levels, and then it's tempting to look at all those lines in the spectrum and think that those correspond to the levels. But remember that a spectral line is where you have a transition from one state to another. Okay, so if we look at the separation between adjacent lines (and this f with a tilde is your energy of a particular state in wave numbers), we can write down our separation between adjacent lines and get a relationship between that and the frequency. So this is a fancy way of saying that we can look at the spectrum and we know that the lines are spaced in increments of 2B. And from that, we can calculate the rotational constant and we can get the moment of inertia of the molecule. And so we can figure out something fundamental about the molecule from this kind of spectroscopy. Okay, so as spectroscopic methods go, this one's a little lame. It doesn't actually contain that much information. I mean, a lot of times you're going to know the moment of inertia of that molecule anyway, or there are better ways to get it. This is not the most useful method in a lab setting. There are some situations where it is useful, which we're going to talk about a little later, but the main thing is in space. It's really cold out there and you don't have the luxury of aiming a giant laser at some galaxy and seeing what molecules are there. You have to deal with the ambient radiation. Okay, so before we talk about applications, let's just go through (again, the notation is a little bit confusing) and recap what everything is.
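Before the recap, here is a short sketch of that 2B pattern and of the wavenumber-to-energy conversion from a moment ago; the B value is an assumed, HCl-like number:

```python
h = 6.62607015e-34  # J*s
c = 2.99792458e10   # speed of light in cm/s, so wavenumbers stay in cm^-1
B = 10.6            # cm^-1, assumed rotational constant

lines = [2 * B * (J + 1) for J in range(5)]       # line for J -> J+1 sits at 2B(J+1)
print(lines)                                      # [21.2, 42.4, 63.6, 84.8, 106.0]
print([b - a for a, b in zip(lines, lines[1:])])  # every gap is 2B, so B falls
                                                  # straight out of the spectrum

# the "wavenumbers as energy" shorthand, made explicit for the first line:
nu_hz = lines[0] * c          # cm^-1 times c gives a frequency in Hz
print(nu_hz, h * nu_hz)       # the frequency, and the actual energy in joules
```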
Okay, so E sub j is the energy of some rotational level j. And that's in normal good old energy units. F of j with a tilde over it is the energy of the level j in wave numbers. So again, we can convert readily between real energy units and wave numbers because we know the relationship there. And then we have nu of j, for the transition j to j plus 1. So again, nu with a tilde over it is your spectral frequency, but it's in wave numbers. And that is for a particular transition j to j plus 1. And that corresponds to the position of the line that you see on the spectrum when something changes from j to j plus 1. And that sounds a little bit convoluted in terms of thinking about the energy level diagrams, but it's important because that's what we actually measure. If we take a rotational spectrum, that's what we're going to see. And so we have to know how to look at that and then back out all of this other stuff that tells us about the states. And then I have one more confusing notational issue to remind everyone about, which is that mu is the reduced mass and that's a constant. And there's also an operator called mu, which is the dipole moment. And of course that's an operator. How do you know the difference? Context. And if you get confused, please ask. So I know there are a lot of notational things that are confusing and hard to get used to. We just have to deal with them. It's an old field that's been around for a long time. It's something that we just have to learn to read. Okay, so let's talk about our rotational energy levels in a little bit more detail. So we're back to talking about a diatomic molecule. And these are things that we've already seen. Yeah, we've already talked about that, so we don't need to go into more detail about it. Okay, so what I want to point out now is that real molecules might not always follow the rigid rotor approximation. And that's something that we should be aware of. Okay, so I'm going to make this point by showing real data. So it's just a table of data. Don't write down all these numbers. But I think it really makes the point if you see what's going on. Okay, so for HCl, we can measure real numbers for these rotational transitions. And so I have some of the actual measured numbers for these states. So for the radius that we measure for HCl, we can just look at the spacing between the lines and get the rotational constant and measure these things. And if we do that for different transitions in the spectrum, here's what we get for the radius in nanometers. So if we take the one going from 3 to 4, that's the frequency it has. And here's the bond length that we get if we calculate it. And now if we take the higher energy states, the bond length that we measure starts to increase a little bit. And if we keep going, it increases a little bit more and a little bit more. And as we go up to higher energy states, our bond is actually starting to stretch. So what's happening is we have our HCl molecule. We're putting some energy in and it's rotating around. And at low energies, the bond does stay rigid. But at higher energies, it's rotating faster and faster. And there's some centrifugal distortion there. And we can compensate for that. There is a correction term for diatomic molecules. I'm not going to make you use it for anything in particular right now. I just want you to know that it exists and it's a very simple correction.
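For the curious, that correction gives term values F(J) = B J(J+1) minus D J squared (J+1) squared. A sketch with an assumed distortion constant D of roughly the right size for HCl:

```python
def term_value(J, B, D):
    """Rotational term with the centrifugal-distortion correction, in cm^-1."""
    return B * J * (J + 1) - D * (J * (J + 1))**2

B, D = 10.59, 5.3e-4  # cm^-1; both values assumed, chosen to be HCl-like
for J in (3, 6, 9):
    rigid = B * J * (J + 1)
    print(f"J={J}: rigid {rigid:8.2f}, corrected {term_value(J, B, D):8.2f} cm^-1")
# The corrected levels fall farther below the rigid-rotor ones as J grows,
# which is why the apparent bond length creeps up at high J.
```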
So it's important to be aware that a lot of times we're using approximations because that makes things easy to treat and we can understand the basics of how something works. But we should always know about the assumptions behind the approximations that we're making and understand when they're appropriate and when they're not. So if we're looking at really high energy states, this rigid rotor approximation might not be the best. It also depends on the particular chemical bond that you're looking at. So if it's a really rigid bond, if it's a very stiff kind of bond, then this isn't going to happen until much higher energy than it will for a floppier bond that can move around more. Okay, so let's talk about the types of rigid rotors. So again, there's a nice table in your book of what all the moments of inertia are. But I just want to talk conceptually about what they look like. Okay, so we have diatomic and other linear molecules, and the moments of inertia are defined differently here. And in the case of diatomic molecules, we really only have one axis of rotation that we're worried about. So if we have a diatomic or linear molecule, the z axis is here and we're talking about rotation in this plane. We don't have to worry about the larger picture of what's happening if it's rotating on a sphere in that case. And so for the degeneracy of those states, g, the degeneracy of state J, is just 2J plus 1. And so all that means is that as we go to higher and higher energy, the states get more degenerate. There are more ways to generate that state than if you're at low energy. And so if you have zero angular momentum about a particular axis, there's only one way to do that. But then as we add more energy, the higher states become more populated. And part of the reason for that is that they have higher degeneracy. Okay, so we can also look at an asymmetric molecule that has three different moments of inertia. And what that means is, say for a water molecule, if I rotate it around the z axis or if I rotate it around x or y, each of those things is different. It doesn't have any symmetry in that sense. So in this case we're not talking about rotations as symmetry operations, the way we've spent all this time doing. Here we're not talking about it in that sense. We're just talking about, like, all right, there's a water molecule in the gas phase minding its own business and it can rotate about the x, y, and z axes. In the x and y cases that's not a symmetry operation, but it's still doing that. And in terms of rotational spectroscopy, we have to worry about it. And remember, on the picture here, the second molecule here is CO2. One of these things is not like the others. Remember that the gross selection rule is that you have to have a dipole moment to see the pure rotational spectrum. So I put it up here because it's an example of a linear molecule, but for this particular type of spectroscopy, we're not going to see a spectrum for it. Okay, so the other types of rigid rotors that we have are symmetric rotors and spherical rotors. And again, these names are a little confusing. So the symmetric rotor is something like ammonia where we have two different types of rotation that we have to worry about. So we can rotate it around the z axis. So that's around its principal axis of rotation. And then it has two other equal moments of inertia.
And that's because it's symmetric in the sense that if we rotate it around x or we rotate it around y, those look the same. So that's the sense in which it's a symmetric rotor. It's not the same as the symmetry operations we talked about in the point group. And so one consequence of that is it has two different rotational constants. And they're called A and B. And they're defined as parallel and perpendicular. And parallel and perpendicular to what? The principal axis of rotation. And on the bottom is an example of part of the rotational spectrum of CF3I, which of course is a symmetric rotor. So there are lots of transitions going on there. Okay, so for the spherical rotor, all the moments of inertia are the same. And that's all that is meant by spherical. So something like methane, an octahedral molecule like SF6, a buckyball, anything like that is going to be a spherical rotor. And we can simplify things by noting that all of these moments of inertia are the same. Okay, so let's look at this in the case of a symmetric rotor. And I just want to draw the parallel between the classical and quantum cases. So for the classical case, here's the angular momentum. We've got two moments of inertia that are equal. We're calling them B and C. And so that has to do with I perpendicular. And we've also got I parallel, which is the unique one that's about the principal axis. So we can write down the total angular momentum. We can write down the energy in terms of that. And then we can also look at this in the quantum case by just making the analogy that we know what the eigenvalues of the total angular momentum are. And we can relate it to the expression for the position of the lines in the spectrum. So here's what you're going to get in terms of where the lines show up in the spectrum with respect to the two rotational constants, A and B. All right, so other things that we need to think about: we have different rotational quantum numbers here because rotation is quantized around each axis that we're worried about. So we've got a rotation about the principal axis. And then we've also got these other two sets of rotational motion. And we have quantum numbers for all of those things. And so what that means is that if k equals 0, that means there's no rotation about the principal axis. So the molecule is in space and it's rotating purely around x or y or somewhere in between there. And if k equals plus or minus j, that means all the rotation is about the principal axis. So it's just rotating like this. So that's how you can think about the relationship between those quantum numbers. We're just talking about what direction is it quantized in. And again, those are always quantized in increments of h bar. So it's written as h there. It should be h bar. And so for symmetric rotors, the specific selection rule is that we can have rotational transitions where the change in k is 0, and we can have delta j being plus or minus 1. And then k also has to take these values up to and including plus and minus j. Question? Yes, it is h bar. Alright, so what that means is that for spherical rotors, we have a lot more degeneracy because there are more axes where things are quantized to be worried about. So in general, this rotor has a 2j plus 1 fold degeneracy because of its orientation in space. And it has another one with respect to its orientation in the molecular frame. So we've got an axis in the molecule. We've got an axis because of its orientation in space.
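A minimal sketch of those line positions in terms of the two constants, using the symmetric-rotor term values F(J, K) = B J(J+1) + (A minus B) K squared; the numerical constants below are assumed round numbers, not values for any molecule from the lecture:

```python
def term(J, K, A, B):
    """Symmetric-rotor term value F(J, K) = B*J*(J+1) + (A - B)*K**2, in cm^-1."""
    return B * J * (J + 1) + (A - B) * K**2

A, B = 6.2, 9.9  # cm^-1, assumed constants (A parallel, B perpendicular)
print(term(2, 0, A, B))  # K = 0: all the rotation is perpendicular to the axis
print(term(2, 2, A, B))  # |K| = J: all the rotation is about the principal axis
```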
So the degeneracy of a spherical rotor gets large really quickly. So if we have j equals 10, so we're only in the tenth state, there are 441 ways to get that. And this has some important consequences for what the spectra look like. So this is a simulated spectrum for FClO3. And it's at one kelvin. So it's really, really cold. This molecule is not rotating very much. So when it's really cold, you know, we're used to thinking about if we don't have much energy, everything must be piled in the ground state, right? Well, in these kinds of experiments, that's not true. And the reason for that is that the ground state is the lowest energy, sure, but there's only one way to get it. That state is non-degenerate. There are entropy considerations, if you will, to getting that. There's only one way to do it, so it's rare. Whereas the little bit higher energy states just have more ways to get that value. And so we see that the maximum population is not piled into the lowest energy state, even at pretty cold temperatures. If we look at something at more like room temperature, we see a couple of things. One is that the distribution is shifted and also it's broadened out a lot. So some really high energy states can be populated because there's a lot of degeneracy. There's a lot of different ways to get that. And this is the introduction to kind of the first part of stat mech, which we'll see at the very end of the class, but we'll try to bring in at least conceptual representations of this all along, because it's good to have a feel for how it works. So to use another analogy, it's like saying the most likely state for the first midterm is that everyone gets 100 because everybody's really smart. And that's true. That's the lowest energy state, right? But it's really, really unlikely because there's only one way to get everything right and there are lots of ways to make little mistakes. So those states are populated. All right. So the last thing I want to mention is an actual application of rotational spectroscopy. So I mentioned that this is the main one. It's really useful for looking at interstellar molecules. So here's a picture of this cloud of gas that has a bunch of molecules out in it that's out in space. And many of the molecules that are known to be out there were discovered near this feature. And how do you know what molecules are in space? So again, you can't shine a giant laser out there and do laser spectroscopy. You have to deal with the ambient radiation that's there. And it's really low energy. Space is cold. And so the way people do that is by measuring these spectra using a radio telescope. And then they make mixtures of molecules in their lab. So you get these spectra that are a big mess. There's a whole bunch of different rotational states. And then they can kind of guess based on pattern recognition and knowing what the spectra of different molecules look like and make up mixtures of molecules in the lab that can match the spectra. So this data is from the lab of Professor Lucy Ziurys, who is at the University of Arizona. I visited her lab a couple years ago. It's pretty interesting. So she has two things. She has these giant telescopes. Like she's in charge of one of these facilities in Hawaii where she can log in from her computer in Arizona and run these giant telescopes. And she gets spectra from space that have a bunch of rotational features of different molecules.
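The population argument above is easy to check numerically. This sketch uses the linear-rotor degeneracy 2J + 1 as the simplest case, with assumed B and T:

```python
import math

k_B = 0.6950348  # Boltzmann constant in cm^-1 per kelvin

def pop(J, B, T):
    """Relative population of level J: degeneracy times the Boltzmann factor."""
    return (2 * J + 1) * math.exp(-B * J * (J + 1) / (k_B * T))

B, T = 10.6, 298.0  # cm^-1 and K, both assumed values
pops = [pop(J, B, T) for J in range(12)]
print("most populated level: J =", pops.index(max(pops)))  # J = 3, not J = 0
```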
And then in order to figure out what's there, she goes and makes mixtures of molecules that she thinks might match in vacuum chambers that are really cold in her lab and compares the two. And so there's a lot of effort there in, first of all, instrumentation, as far as being able to measure these things, and also in data analysis, because you have to do a lot of pattern recognition and sift through a lot of spectra and compare whether they're the same or not. So this is what the stuff is actually used for in real life. And here's the instrument that you need to do it. So it's kind of exotic. It doesn't come up much. It's neat, but it's not used all over the place. Next time we're going to talk about vibrational spectroscopy, which is used all the time in research labs, and you've probably all used it yourself in the context of IR and maybe Raman as well. Okay, happy Martin Luther King Day on Monday, and I'll see everybody on Wednesday.
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 06. Molecular Structure & Statistical Mechanics -- Rotational Spectroscopy -- Part 2. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:01:53 Angular Momentum 0:08:13 Spherical Harmonics 0:13:25 Rotational Spectroscopy 0:26:00 Energies and Frequencies 0:31:55 Types of Rigid Rotors 0:36:28 Symmetric Rotor 0:39:40 Degeneracy
10.5446/18913 (DOI)
Good morning. Today we have various things that we need to take care of, finishing up our discussion of symmetry and bonding. We're mostly done talking about it, but we do need to talk about how we figure out if various integrals go to zero in a particular space or in a particular point group. And then we're going to talk about some terminology that we need to get into rotational spectroscopy, and maybe we'll get to talking about quantization of angular momentum. So I hope to be done with rotational spectroscopy by the end of Friday's lecture so that we can start vibrational spectroscopy next week. Let's see how that goes. Okay, so just to follow up on the discussion last time, we were looking at these bonding examples and seeing which orbitals can form sigma bonds and pi bonds. And I asked you to go through and reduce the reducible representations that we came up with in class and make sure that you get the right answer. And I just want to follow up on that to make sure everyone gets it. Okay, so here was what we got. So we had our representation of the pi bonds. We know that they have to be perpendicular to the sigma bonds, which was the only condition that we started with. And so we had to consider the in-plane set and the out-of-plane set. And by making our reducible representations based on the symmetry of those objects and reducing them, we came up with the following combinations of irreducible representations. So for the out-of-plane set, we came up with A2 double prime and E double prime. And for the in-plane set, we came up with A2 prime and E prime. And if we go and look at the character table and look at what objects belong to these particular irreducible representations, we see that we've got a Pz orbital belonging to the out-of-plane set. The in-plane set has one symmetry species that doesn't correspond to any orbital. And then the other one contains the x and y orbitals and then also some d orbitals. So what we know from this is that Px and Py wouldn't make any sense because they're already involved in the sigma bond. We know that the pi bond has to be perpendicular. And we also know that d orbitals are not realistically going to be involved in a nitrogen compound. That's what we were talking about. So they're energetically not available. And so we know that the Pz orbital on that central nitrogen and also the oxygens is what's involved in the pi bond, which again is something that we already knew. So it's useful to do these bonding examples when we're learning these things for a couple of reasons. One is because you don't need me to make up practice examples. You can go through sort of all the random Lewis structure examples from general chemistry and do problems like this. There's really a limited number of point groups that have few enough symmetry species that you could realistically be expected to do this in class. So if you do a few of those and a few of the ones that are harder, you can really get a good handle on this and you can check your answer yourself, because you know in the case of these bonding examples what you should get and it should be consistent with your chemical knowledge. Okay, so we also had a bonding example in the homework where we wanted to do the same kind of analysis for oxalate. And I'm not going to go through the whole thing. I just wanted to give you a couple hints in case you're having trouble getting started. So first of all, here's your basis. You can draw little arrows representing the carbon-oxygen bonds.
And why did I not represent any single and double bonds here? Anybody know? I know you do because people talked about it in office hours. Are they all the same? Yes, right? It has resonance structures. And so we need to make sure to take that into account. So we assign it to a point group. We know our basis. There are four things in our basis and so we know that the character of the identity matrix is going to be four. And then let's just do one example of setting up a matrix. So if we do a C2 rotation in the Y direction, remember the way this is set up: the molecule's in the XY plane and Z is coming out at you. So if we rotate about the Y axis, here's what we get. We're just flipping it over this way. And we need to come up with the matrix that gives us that when we multiply it by the original vector. So A1 switches places with A2 and A3 switches places with A4. So there's the matrix that we get for that. And the character for that operation is zero. So this is the kind of stuff that you should be able to do. And it's useful to practice. You know, when you come to office hours, we can do more examples like this. Again, you can make them up yourself and do them and make sure that you get it. Okay, so that concludes our discussion of bonding. I'm not going to go the rest of the way through this. I just wanted to give you a hint. Let's talk about it in a little bit more general way because we're going to need these sorts of symmetry arguments when we start talking about spectroscopy. So spectroscopy is all about the interaction of light and matter. And we're going to have different kinds of states that the molecule can be in. First, we're going to talk about rotational states, but then vibrational states, electronic states, nuclear spin states. And we're going to have selection rules where we'll see that only certain transitions are allowed. And the reason that happens is because of these symmetry arguments. So we need to be able to do one more thing with these symmetry sorts of things. So one thing that I'm sure you're familiar with, or at least have seen before in calculus and some other math classes, is just the even odd rule. So if we have even functions, they're symmetric over a symmetric interval. Odd functions are anti-symmetric. So if you integrate an even function over a symmetric interval, you get some number; if you integrate an odd function over a symmetric interval, you always get zero. And this is a nice thing to remember because it means that there are integrals that could look pretty nasty and you could just say they go to zero by inspection. Of course, if they don't, then you have to work it out. But that's a trick that mean professors like to pull to sear that into your frontal lobes, you know, put some really horrible looking thing on an exam, and then if you remember this rule, you can see that it just goes to zero. We're going to look at how to use this in a little bit more general way with symmetry groups. So just here's some more examples. If we want to multiply these functions together, if you multiply two even or two odd functions, you get an even function. If you multiply an even times an odd, you get an odd function. We can also look at how their derivatives work. If you take the first derivative of an even function, you get an odd function and vice versa. And again, if we go back to our general chemistry intuition, this is something that we all understand at a really intuitive level.
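If you want to convince yourself of the even odd rule, here is a two-line symbolic check (an aside, not from the lecture):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**2, (x, -1, 1)))         # even function: 2/3, some nonzero number
print(sp.integrate(x**3, (x, -1, 1)))         # odd function: 0
print(sp.integrate(x**2 * x**3, (x, -1, 1)))  # even times odd is odd: 0 again
```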
So if we talk about molecular orbitals, let's just do a really simple case: H2. If we have our two S orbitals in phase, they add constructively and we get a bonding molecular orbital. And if they're out of phase, they interfere destructively. There's a node in the middle and we get an anti-bonding molecular orbital. And again, this is something that we all remember. So we can look at this in a little bit more systematic way. We can talk about, you know, if we have an S orbital and a PX orbital, you know, assuming that we get it oriented the right way, there's non-zero overlap. But if we look at something like an S orbital with a PZ orbital in this coordinate system, in this orientation, we have a situation where lobes of opposite signs cancel. And if we have an integral like this, where we have the product of two functions and we're integrating it over some symmetric interval — here, all space — we can find the symmetry species of each function in whatever point group we're in. Then we want to multiply them together, and that will give us a reducible representation. And when we reduce it, we have to look at it and see if it contains an A1. And if it doesn't, then there's no overlap. So let's think about what that means. A1, remember, is the symmetry species that is invariant with respect to all transformations. It has a character of one under every operation. And what that means is that if I have, say, a chemical bond and my orbitals overlap, that has to be invariant to all operations. If that weren't true, my chemical bond would appear and disappear when I rotate the molecule, for example, and that wouldn't make any sense. So that's how you can understand how this works. So let's look at some pretty simple examples. So if we look at a molecule like ammonia, and if we want to know whether there's overlap of the S orbital on the nitrogen with this particular linear combination of S orbitals on the three hydrogens, we can do that the same way. So again, this doesn't tell you whether that's the only thing going on. Obviously, we know it's not. There are P orbitals involved also. We just want a yes-or-no answer: do these things overlap? Another thing I want to point out is that just because you can make particular linear combinations of orbitals doesn't mean that that's the ground state or that that's necessarily what's going on in a given system. This is important because, particularly when we look at electronic spectroscopy, we are going to see excited states and things that look pretty weird, but we have to worry about them because we're putting in energy and kicking the electrons up there. So again, this is just telling us it's powerful, but it's limited in what it can give us. We get a yes-or-no answer as to whether these things have any overlap, and that's about it, but it is useful. Okay, so we just go through and do this by inspection. So F1 is our S orbital on the nitrogen, and it's a sphere, so it's going to be invariant under all these transformations. Then if we look at the linear combination that we have of the three hydrogen orbitals all in phase with each other, we have to actually look at that and see what it does. Of course, we know it's invariant under the identity, because everything is. For C3, we do get something that looks the same. It doesn't change sign. And for Sigma V, we get something that looks the same. And then if we multiply all these characters together, we do get something that looks like A1, and so it has overlap.
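As an aside, this "does the product contain A1?" test is easy to automate with the reduction formula. Here's a minimal sketch — my own, assuming numpy — for C3v; the (2, -1, 0) characters anticipate the out-of-phase example we're about to do:

```python
# "Does this representation contain A1?" in C3v, via the reduction formula:
# n(A1) = (1/h) * sum over classes of [class size * chi(class) * chi_A1(class)]
import numpy as np

sizes = np.array([1, 2, 3])            # class sizes: E, 2C3, 3sigma_v (h = 6)
A1    = np.array([1, 1, 1])            # totally symmetric irrep

def n_A1(chi):
    return int(np.dot(sizes * A1, chi) / sizes.sum())

s_orbital    = np.array([1, 1, 1])     # N 2s orbital: invariant under everything
in_phase     = np.array([1, 1, 1])     # three H 1s orbitals, all in phase
out_of_phase = np.array([2, -1, 0])    # two H 1s orbitals out of phase (spans E)

print(n_A1(s_orbital * in_phase))      # 1 -> contains A1, overlap allowed
print(n_A1(s_orbital * out_of_phase))  # 0 -> no A1, the integral vanishes
```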
So we know that there can be some interaction between this S orbital on the nitrogen and this particular linear combination. So now let's look at another one that has a different sort of symmetry. So we've got this S orbital on the nitrogen again, and now we have a linear combination that consists of two of the three S orbitals of the hydrogens out of phase with each other. So again, does this look like a realistic orbital that we usually talk about in terms of bonding? Not really, but we can make linear combinations like this, and when we get into excited states, we will see some things that look kind of weird. Okay, so again, we know what happens with the S orbital. It's invariant under all transformations. The second linear combination has a character of 2 for the identity, and then for C3 we get an overall character of minus 1, and for Sigma V we get an overall character of 0. And then if we multiply these together and reduce it, we get the symmetry species E, and there's no A1 in that, so there's no overlap. And I think we're going to come back and talk about this later. I just want to introduce it so that we get a feel for how it works, and when we start talking about selection rules, we're going to talk about it a little bit more. So for now, the most important thing to remember is the even-odd rule and also how you go through this general procedure. Question? First, going back to your previous slide, where you have F3 with three different hydrogen S orbitals — aren't they all different in that case, so for C3v, shouldn't the character of the identity operation under F3 equal 3? Yeah, I took a shortcut and ended up reducing this. Why don't I write up a little description of this with more steps in it and post it for you guys. That might be a useful thing to do. You're right, I did skip some steps. Okay, so now we're moving on and we're going to get to Dirac notation. Have you seen Dirac notation before? Is this something that came up last quarter? Okay, it's something that's really important to be able to read the literature in quantum mechanics. A lot of things are written down this way, and it takes a little bit of getting used to at first, but it's just a shorthand notation for writing down wave functions and writing down integrals, and it just saves a lot of time. So we're going to see a lot of cases where we have a lot of complex integrals, we're integrating wave functions together, and everybody knows what the function is — we don't need to keep writing it down over and over again. This is just an easy way to write it. It is a very compact notation, and it contains a lot of information, and you need to know what's going on in order to be able to use it. So we have to be careful not to make mistakes and make sure that everybody knows what's in there, but once you get the hang of it, it's really useful and saves a lot of time. Okay, so if we have a normalization condition — and I know that you have seen this and you know what that's about — that's just an integral of the complex conjugate of one wave function with some other wave function over all space, and we know what you get here. So that equals 1 if n prime equals n, so if your two wave functions are the same, and it equals 0 otherwise, and that's just a consequence of the fact that these things form an orthonormal set. Okay, so in Dirac notation, this is how we write down that integral. It's just a shorter way to do it.
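Written out, the correspondence is just the standard orthonormality statement:

```latex
\langle n \mid n' \rangle \;=\; \int \psi_n^{*}\,\psi_{n'}\,d\tau \;=\; \delta_{nn'} \;=\;
\begin{cases} 1, & n = n' \\ 0, & n \neq n' \end{cases}
```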
So this funny little front half of the bracket thing is called a bra, and the other one is called a ket, and so Dirac notation is also called bra-ket notation. And if you just have a ket — yes, let's all have a middle school moment and giggle at the bra, that's fine. I can see people trying to hold it back. There's no point, we might as well just give in. So when you see a bra by itself, that's just the complex conjugate of a wave function; that's all it means. The ket by itself is a wave function. When you see them together like this, that means take the integral of that over all space. So there is an integration implied in that operation. And then we can also write down this condition for what you get in a different way, as the Kronecker delta function. Did you see this last quarter? Not really? Okay. You did, because everybody looked happy and looked like it was familiar when I talked about the normalization condition, where it equals 1 if the wave functions are the same and 0 if they're different. That's all this is. The Kronecker delta function is just a compact way of writing that down. So when you see delta sub nn prime, it's kind of a function; it's just telling you that that equals 1 if they're the same and 0 if they're not. Okay, so now we can go through and talk about how to do some other things. So here's how we write down our normalization. And I'm saying it's a shorthand notation and it saves space. It doesn't look like it saves that much space right now, but imagine that we have to put in what the actual wave functions are, and it's a big mess. Whereas if we use the bra-ket notation, as long as we're in a specified system and we know what the eigenfunctions are, we can just specify them with their quantum numbers, for example. Okay, so let's look at a matrix element. So I want to write down a matrix element of some operator, which I'm calling omega. So we know that quantum mechanical operators are linear operators; we can represent them with matrices. And the matrix element nm for omega is just this integral. And here's how we go about writing that down. And so if we wanted to make a whole matrix for our representation of omega, we would just have to go through and set up all of our matrix elements. Okay, does anybody have any questions about this? It's really important and it's going to keep coming up over and over again, and I'm not going to write out all the wave functions every time, so you're definitely going to see it. Yes? Yes, so what exactly shows that it's an integral? Like, the bracket means it's an integral, and one is inside the other? Okay, so if you just have, say, n prime in the bracket that it's in, that's a ket. That's just the wave function. The other one, that's the bra; that's the complex conjugate of n. So there's a lot in here. So the n and n prime are the quantum numbers that represent that particular wave function. So you have to know what system you're in. So if you say it's a particle in the box, then we're talking about the particle-in-the-box wave functions, and you have to know what the one for n equals 1 or n equals 2 is. And the system that you're working in and what the wave functions are have to already be specified. But once you know that, this is a shorthand way of writing them down. Then when you get the bra and the ket together like that, that's when it implies an integration over all space. Yeah, my question is, what does the line in the middle mean? Like, does that make sense?
What does the line in the middle mean? So, there's a line in the middle. Okay, so if you look at the picture in the bottom right of the matrix element: the m in that bracket is a ket, and then the other thing is a bra. And so imagine, you know, take the omega out of the middle. You're putting them together. The line that's on one part of the bra and the one that's on the other part of the ket are superimposed on top of each other. That's all it is. You're just sandwiching them next to each other. And then when we put the operator in between, that means that we operate omega on m first. Remember the order of operations for these things. We operate the omega on m, and then we take the integral of whatever the result of that is with the complex conjugate of n over all space. Does that help? Yes. Any more questions? If you don't understand, please do ask now, because it's going to come up a lot. Yes. So you said that when you put them together in the bracket, the integration is over all space, like negative infinity to infinity? Well, it depends on the context of what you're doing, right? So again, we have to know what the system is. So if you have a harmonic oscillator that's in a particular potential, it's over the space of that system. And, you know, this is really powerful because it's very general. When we start to talk about NMR spectroscopy, which we're going to, then we'll be talking about things in spin space. It's not even in real physical space. It'll be in spin space, which we'll learn about later. So it's extremely general, and this is used for writing down all kinds of things in physics and chemistry. And, you know, again, it's just notation, but you will see it all over the place if you want to go read the literature in quantum chemistry or physics, and it is going to come up a lot. So your book mostly doesn't use it. There's a little section somewhere in the early chapters on how to do it, as kind of an aside. It's probably useful to go look it up and read it if you want some extra clarification on it. The Wikipedia page on this is really good. I also recommend looking at that. But for the most part, your book doesn't use it. They just write out all the integrals. So there will have to be some translating back and forth, because I'm going to use it in class. You'll see it in the literature, and your book mostly does not do it. Okay. So let's move on. Oh, sorry. Any more questions? Can you use the Kronecker delta notation with an omega? So in that case, there was no omega. That was just — I integrated the complex conjugate of n with n prime over all space, and the answer that I got is the Kronecker delta, which just means one if they're the same, zero if they're different. I did two different things with the notation. The first thing I did was I just set up a normalization condition. Then the second thing was an example of a matrix element for omega. They're separate issues. Make sense? Okay. We'll see lots more of this. I just want to introduce it at the beginning. Okay. So now we're ready to really start talking about spectroscopy. And what we're going to do here is we're just going to go through the different types of spectroscopy that there are that we can use to solve chemical and physical problems. And I wanted to put up this kind of big picture view of what spectroscopy is all about.
So in the most general sense, it's about the interaction of electromagnetic radiation with atoms and molecules. And we can use this interaction to probe all sorts of properties that we want to learn about the molecules. And we're going to go roughly in order of things that go from lowest energy to highest energy. And so this is an energy level diagram showing — you know, not really to scale, but hopefully it gives you the idea — how much energy it takes to do different things with a molecule. So we have the electronic transitions. So we have these two potentials here for the electronic transitions. And of course the ground state is the bottom one. And this blue arrow is showing the system absorbing a photon and jumping up to the next excited state. So that's what we're talking about when we talk about fluorescence or, you know, absorption spectroscopy — when you do that in the lab and measure the optical density of something, we're talking about just absorption here. It takes a lot of energy to perform these electronic transitions. So this usually happens in either the visible region or in the UV. And if we don't have enough energy to excite that, we can excite vibrational or rotational transitions. And all of these things tell us different things about the molecule. So I should back up a little bit and point out that I'm going to tell you the quantum numbers that belong to these different things. So for electronic states, the quantum number we're usually going to use is epsilon. And then if we look at vibrational transitions — so now we're confined to the ground electronic state, because now we're just putting in infrared radiation and we don't have enough energy to excite those electronic transitions — that little red arrow from the ground state to the next excited state within that well is a vibrational transition. And that happens in the infrared. And the quantum number that we use for that is nu. So don't get confused: it's not a v, and it's not the frequency — you know, nu gets used for a lot of things. But we have to pay attention to context. Rotational transitions, which are these little tiny ones in between the vibrational transitions, happen in the microwave. And we use the quantum number J. So again, this is a mixture of just terminology — you know, what are we going to call these things — and also looking at the big picture on a single energy scale, and this will give us a feel, hopefully, for why we see certain things in certain kinds of spectra. So for instance, when we do vibrational spectroscopy, we're going to see a whole bunch of fine structure that comes about from the rotations. Because vibrations take a lot more energy to excite than rotations, when we excite something to an excited vibrational state, we get a bunch of rotational excitation for free. Same thing when we excite electronic transitions: we're going to see fine structure due to the vibrational transitions. But it doesn't go the other way. If you're just putting in microwaves, you don't have enough energy to excite a vibrational transition, and so you don't see it. And so one aside here is, you know, your friends and relatives who tell you that a microwave oven works by exciting the vibrational frequencies of water — what do you think about that at this point? Not enough energy, right? So microwaves just excite rotational transitions, and you need IR to excite those vibrational transitions.
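To put rough numbers on that — a back-of-the-envelope check of my own, assuming a standard 2.45 GHz oven and a typical vibrational wavenumber around 1700 cm⁻¹:

```latex
E_{\text{microwave}} = h\nu \approx (6.626\times10^{-34}\ \text{J s})(2.45\times10^{9}\ \text{s}^{-1}) \approx 1.6\times10^{-24}\ \text{J}
```
```latex
E_{\text{vibration}} \approx hc\tilde{\nu} \approx (1.986\times10^{-23}\ \text{J cm})(1700\ \text{cm}^{-1}) \approx 3.4\times10^{-20}\ \text{J}
```

That's a factor of about twenty thousand, so a microwave photon falls far short of a vibrational quantum — but it is right in the range of rotational level spacings.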
So your microwave works by having an electric field that's oscillating and moving the dipoles around. Okay, so as a general matter, when we go to record a spectrum that's going to tell us something about a molecule, we're going to sweep through a range of frequencies and measure the signal. So this is I of nu, and this nu really is the frequency. And we're going to plot it as a function of frequency. So who here has actually taken a spectrum of a molecule yourself, of any kind? Maybe almost everybody, right? Don't you do this in general chemistry? So have you taken an absorption spectrum? Raise your hand if you've done Beer's law. Okay, good. How about IR? Do you do that? Yeah. NMR? Have you done that yourself? Okay, good. So you have some experience with all of these things. So one of the things that you probably know is that, at least in NMR and fancy IR spectroscopy, this description of "you vary the frequency and sweep through and see the response" isn't 100% correct. You can do it that way, but it's not the only way to do it. And we will talk about what happens in these modern instruments. But okay, so here's another view of the same kind of thing: what energy range does stuff happen in? So here's the electromagnetic spectrum. And it turns out people are pretty clever at making use of electromagnetic radiation and its interaction with molecules. We can use just about every part of the spectrum to learn something about things that we're interested in. So if we talk about radio frequency, that is the resonant frequency of nuclear spins. So NMR spectroscopy is down here. We will talk about it later. Then once we get into the microwave, that's where we excite rotational transitions. IR is where we look at vibrations. UV-Vis spectroscopy is where we can excite transitions of the valence electrons. So this is what we're usually looking at in the little spectrophotometers in molecular biology labs. And when we investigate Beer's law, that's what's going on here: we're looking at transitions of valence electrons. If you want to look at core electrons and, for instance, find out what atoms are present on a surface, then you need x-rays. So x-ray photoelectron spectroscopy can be used for that. So we need higher energy to excite those more tightly bound core electrons. This starts to get a little bit exotic. You need a big x-ray source. This is something that you would do at maybe a synchrotron, you know, a source that's at a big national lab. It's not an instrument that people would typically have sitting around in a lab. These other things are. And then the last one, the gamma rays — this also sounds pretty exotic, right? We can actually look at excitations of nuclear states. It's called Mössbauer spectroscopy, and that's done with gamma rays. So we will go through many of these types of spectroscopy, not all. We're not going to talk about XPS or Mössbauer spectroscopy so much, but the others we will go through and give you a feel for how they work. Some of these things are things that you're very likely to use in your research. In order to talk about the mechanics of how spectroscopy works and start really getting into it, we need to talk about the Born-Oppenheimer approximation. And I know that you've seen this last quarter, but let's just review it quickly because, you know, maybe it'll be put in a little bit more practical context.
So the Born-Oppenheimer approximation just says that the electrons move around a lot faster than the nuclei, and that means that we can separate them, which is really good, because we would have very ugly problems if that didn't work. And of course it's not a good approximation in all cases, but for many of the things that we want to do as chemists, it is a good approximation. And so what that tells us is that we have some overall wave function for the molecule, and it involves the motions of the electrons and the nuclei, and we can separate variables and treat them separately. The electrons are tiny and they're moving around really fast; the nuclei are big and heavy, and it takes them longer to catch up. So the electronic wave function does depend on the positions of the nuclei, which we know. We've been talking about molecular orbitals and things like that, you know, where the electrons are around in bonds. So it's not that it has nothing to do with the nuclei at all. The positions are important, but their motion isn't really important on the time scale of the electrons under this approximation, which is usually pretty good. So we can consider that the nuclei are just sitting still on the time scale where we're worried about the electrons. And so here's our Schrodinger equation for the electrons. So we've got our Hamiltonian and our wave function, and now notice these have a subscript of epsilon to indicate that we're talking about the electronic states. That's its quantum number. And this is a function of the electron coordinates and the nuclear coordinates, but the nuclear coordinates we're going to be treating as fixed. And then when we go to talk about the nuclear motion — and this is rotation and vibration — that just sees an overall smeared-out potential from the motion of the electrons. It sees the average of what the electrons are doing. And so here we have our Hamiltonian for the nuclear motion, and it's got subscripts of nu and j. Remember, those are our quantum numbers for vibrations and rotations, and those just depend on the nuclear coordinates, because it's just seeing some overall smeared-out potential from the electrons. So again, this is a really useful approximation, because it means that for the most part we can treat our electronic spectroscopy as being separate from rotations and vibrations under many conditions. Okay, so that is it for the kind of basics and housekeeping kind of stuff and review. Let's move on to actually talking about quantization of rotation, angular momentum, things like that that we need to know for rotational spectroscopy. So I guess it's not entirely true that we're done with review. We are going to talk about some things that you learned last quarter, but give it a little bit of a different spin, if you like. So please review chapter three if you don't remember it really well. And I really recommend reviewing all of this stuff from last quarter as we talk about it, just because quantum is one of these things where — at least I found when I was learning it as a student — it's not very intuitive, and when you get more information about how it works, you really have to go back and review the basics and make sure that you understand them, and it makes more sense every time you go back and do that. So please do review it. So one of the things that came up last quarter is the case of a particle on a ring, and you might wonder why you're interested in a particle on a ring, and that's a fascinating question.
One thing about basic quantum mechanics is that you can end up with a lot of these examples that don't sound very practical. So you look at a particle in a box and a particle on a ring and a particle on a sphere and a harmonic oscillator. Is that right? Is that what you did? So why do you think that we pick these particular things? Is it because they're extremely realistic and they describe everything in chemistry and physics? Yeah, it's because those are the only things that you can solve analytically; anything more complicated than that, you need computational methods. There are lots of computational methods. There's a huge field of computational chemistry where people do electronic structure calculations. There's a big center for that at UCI. But these simple cases where you do get things that you can solve analytically do give us some intuition about processes that we care about. So if we think about a particle on a ring, in and of itself that's not necessarily the most exciting thing — you know, you have some particle going around — but you can also think about it as a rigid rotor. So picture a diatomic molecule that's just in the gas phase; it's off by itself, it's not interacting with anything. So we have the diatomic molecule and it's rotating around. If you pick a point on that molecule and follow its rotation, it looks like a particle on a ring, right? We can use the same mathematical treatment to talk about our rigid rotor. So what do I mean by a rigid rotor? That means that the bond between those two nuclei isn't flexing at all. It's not changing, it's not bending; it's just rotating around. As we'll see, that's an approximation that works pretty well at low temperature. So if we're in low energy states — and, you know, it also depends on the molecule: if we have a very stiff bond, that works better than if we have a very floppy bond. But it is an approximation that we can start out with. Okay, so we can go through angular momentum, and if we're talking about a particle on a ring, we have something that's in cylindrical coordinates. That's sort of the natural coordinate system to use here, because we have our diatomic molecule and we can say it's rotating about the z-axis in this plane, so it makes sense to do this in cylindrical coordinates. And another thing to go review — and look up the Wikipedia page if you're not really up on it — is how to transform back and forth between Cartesian coordinates and cylindrical and spherical coordinates, because that's something that we're going to need to know how to do. Okay, so here's our angular momentum in the classical case. So our angular momentum about z is plus or minus the momentum times the radius. And we can get the moment of inertia here, and we can calculate the moment of inertia for a diatomic molecule. The book goes into all the different kinds of rigid rotors that you can have that have different moments of inertia. And that's a useful thing to look at. It's good to go through it and understand how it works. It's not something we're going to spend a huge amount of time on in class, because it boils down to a lot of crap to memorize. There are a lot of formulas, and that's not really what we're about. We want to learn how to solve problems. So in class we're mostly going to focus on the diatomic case, with the understanding that there's all this other stuff that you can do. We're going to look at the case where we can solve things analytically. Well, you can for some of these other things too.
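For the diatomic case we'll focus on, the key quantity is the moment of inertia, I = μr². A quick worked number — my own illustration, using CO with a bond length of about 1.13 Å:

```latex
\mu_{\mathrm{CO}} = \frac{m_{\mathrm{C}}\,m_{\mathrm{O}}}{m_{\mathrm{C}} + m_{\mathrm{O}}}
 = \frac{(12)(16)}{12+16}\ \mathrm{amu} \approx 6.86\ \mathrm{amu} \approx 1.14\times10^{-26}\ \mathrm{kg}
```
```latex
I = \mu r^{2} \approx (1.14\times10^{-26}\ \mathrm{kg})(1.13\times10^{-10}\ \mathrm{m})^{2} \approx 1.5\times10^{-46}\ \mathrm{kg\,m^{2}}
```

That 10⁻⁴⁶ kg m² scale is what ends up putting the rotational level spacings down in the microwave.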
Anyway, it's worth going and looking at it just to make sure you understand how it works, but we're not really going to focus on it in class. So — I'm not going to go through and solve the Schrodinger equation for this for you; I assume you did that last quarter — but we'll just go through the argument for how you get this. So in the quantum version of this, we can use an analogy to the de Broglie wavelength for the momentum here. And the fact that it has to be quantized comes from the fact that it has periodic boundary conditions. So it's a little bit strange, but it makes sense when you think about it. So our wave function has to be single valued. So that means that if I have a wave function for my rotational state, if my molecule is rotating around, when it comes all the way around the circle and makes a complete circuit — so it goes through a 2 pi rotation — it has to come back to the same place. And that just intuitively makes sense, right? You can't have something discontinuous happen to it as it's going around in the circle. And the quantization comes out of that condition, because an integral number of wavelengths has to fit in that circle. So here's how we would write that down. So some integral number of wavelengths has to go within that circumference for our little point on the molecule rotating around the ring. And so that's where we get the quantization from. Okay, so here's the quantum version of our angular momentum in the z direction. And what we get out of it is that it comes in increments of m sub l, which is its quantum number, times h bar. And m sub l can be 0, plus or minus 1, plus or minus 2, etc. Okay, so now we've described the z component of angular momentum. Let's look at this a little bit more. So the natural place to put this is in cylindrical coordinates. And of course, r is fixed, because we're talking about a diatomic molecule rotating. And we said it's a rigid rotor, so its radius can't change; it can't vibrate. It's stuck at that particular radius. So r just becomes a constant. That part integrates out. And so our Hamiltonian is simplified. And so now here's how I write down that equation in Dirac notation. So again, if you don't remember what the wave functions for this look like, go look them up in chapter 3. But instead of calling them psi, I'm just going to put that m sub l in the ket. And that indicates that I'm talking about the wave function for that particular m sub l value. And so if I write out the Schrodinger equation substituting in for what the Hamiltonian actually is, here's what that looks like. And I also want to point out that angular momentum and the angle are complementary values. What do I mean by complementary values in this context? Do you know? Does that ring a bell from last quarter? Yeah? You can't measure them both to arbitrary precision. Exactly right. Yeah, they're complementary in the sense of the uncertainty principle. So it's an analog to position and momentum. You can't know both the angular momentum and the angle. So they're complementary observables. And I bring this up because I wanted to point out that there is a zero angular momentum state, and that's legal. And that would seem to violate the uncertainty principle, right? Because we know the angular momentum with absolute precision there: it's zero. And this is a little bit counterintuitive for people who are used to thinking about quantum mechanics a little bit.
Because if you think about vibrational energy levels, or your harmonic oscillator potential, there's a zero point energy, right? You never have zero energy for that. For a rotational state, you can. You can have zero angular momentum. The reason for that is that in the case where you know the angular momentum with infinite precision, you don't know anything about the angle. It's just out in space; you know nothing about it at all. So that's why you're able to have a zero state for that. Okay, so now we've talked about the z component of angular momentum. We have something rotating on a ring. Now what about if we have the particle free to rotate over a whole sphere? Did you do this last quarter too? So you talked about the hydrogen atom wave functions. Again, everybody has seen those from general chemistry as well. So now we need to talk about the general angular momentum for a particle on a sphere. And I think we're not going to finish that this time; it'll be too rushed. So I'm just going to pick it up next time. Does anybody have any questions about what we did today? It's kind of jumping around between different topics, but there were some terminology things that we needed to clear up before moving on. All right, have a good day and I will see you Friday.
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013) Lec 05. Molecular Structure & Statistical Mechanics -- Rotational Spectroscopy -- Part 1. Instructor: Rachel Martin, Ph.D. Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules. Index of Topics: 0:00:55 Which Orbitals can Form Pi Bonds 0:06:12 Symmetry Properties of Functions 0:08:33 H2 Molecular Orbitals 0:09:00 Vanishing Integrals 0:15:35 Dirac Notation 0:25:54 Big Picture: Spectroscopy 0:34:20 Born-Oppenheimer Approximation 0:38:21 Quantization of Rotation 0:45:39 Z-Component of Angular Momentum
10.5446/18910 (DOI)
Good morning everybody. Let's go ahead and get started. It's about that time. So if you didn't already get a character table, raise your hand, because we have a bunch of them here. All right, looks like almost everybody did. In case you haven't met our TAs yet, this is Jerry. I think John Mark and Upaula you know from last quarter, yes? Okay. All right, so we have a lot to do today. We're going to do a few more examples of assigning molecules to point groups. So I hope everybody has your flow chart — hopefully corrected. There's a corrected one posted online; there were a couple mistakes, so sorry about that. Please do check out the corrected version. So you should have your flow chart and the character table that we're handing out in order to be able to go through the examples. And so hopefully everybody tried the practice problems on assigning things to point groups, and we're going to go through a few of those this morning. And then we're going to continue on to talking about matrix representations. So somebody also asked me when the lectures are going to be posted online. And I actually don't know, but the person who does know is Sean, who's doing the filming in the back. Can you tell us when that's going to be? Yeah, so actually the first lecture is available online. You can search on YouTube — you can search the course name or UCI OpenCourseWare, which is through the department. You'll be able to find it by the professor's name or the class name. So I'll send a link to you, and you can send it to everybody else. It's just one link that will be updated with the latest lecture. So the first one is online right now. Great. Thanks. So in general, how long is it going to take after each lecture? It should be less than a day to a day. Great. Thank you very much. So that's where they'll be. All right. Any more questions about logistics, things like that? One thing I should mention is that my Wednesday office hours are three to four; I'm going to keep those. A bunch of people showed up at office hours yesterday and we had a really good discussion, so I encourage you to do that. The Tuesday office hours, I think, are going to be moved to 11 to 12 from now on to avoid conflicting with our discussions. So is there some class that everyone has to take Tuesday from 11 to 12 that I don't know about? Okay. Good. That sounds like a winner. Okay. So let's talk about some point group examples. Okay. Can everybody see okay? So unfortunately I don't have very much control over the lights. Our choices are like that and like that. So raise your hand if you like it better darker. Raise your hand if you like it better brighter. Okay. Darker it is. All right. So if we have a molecule like this and we want to assign it to a point group, what do we want to do? We need to get out our flow charts and look at: does this thing belong to any of the special groups? So we have low symmetry, high symmetry, and linear. So if we added one more substituent here, then that would be something like sulfur hexafluoride or xenon hexafluoride — that would be an octahedral molecule. But we don't; we have this. So what do you think? I think I'm going to get somebody to volunteer to assign this thing to a point group. You just volunteered? Oh no. Okay. So just so everyone can see what it looks like here: we've got this thing that has a square on the bottom, and then it's got one substituent sticking up.
And the thing that keeps it from being high symmetry is that it doesn't have another one on the bottom. So it's not high symmetry? So it's not high symmetry. Okay. So now what? Now we need to find the principal axis. Would it be C4? That's right. It has a C4 axis. Okay. So that's good. So now the next question is: do we have some C2 axes that are perpendicular to that C4 axis? Any more C2 axes? Can we do some 180 degree rotations perpendicular to the C4? I'm going to say no. Grab the model and pick it up. It'll be easier to see. Okay. So try to hold it under the camera so everybody can see. There you go. If that's C4... Yeah. So there's 90 degrees for the C4. All right. Well, so that's your C4 operation. But perpendicular to that — can we? So, perpendicular to that, it would be running through there. So if you rotate 180 degrees, it's not the same, right? If you rotate 180 degrees, it's not the same, like that. No. So what do you think? Everybody agree? Yeah. So there are no C2 axes. Okay. So now we know that it's got to be a C or S2n group. So what do you think? Does it have a horizontal plane perpendicular to that axis? Yes? No? So remember, here's our principal axis. I know there's one parallel. Yeah, there's two parallel. But what? Is that a dihedral? Well, it's vertical, right? Because it contains the principal axis. There's two planes. To be dihedral, it would also have to bisect some C2 axes, and we don't have any C2 axes. So they're vertical planes. So you're... do I have to just know that? Well, you're on the right track. But the question, if we're following the flow chart, is just: does it have a horizontal plane? Can you go ahead and ask your question? No? Okay, sure. So the question is how do we know which symmetry planes are vertical and which ones are horizontal? And the definition of that in this context is that if your symmetry plane contains the principal axis, it's vertical. We also have dihedral planes. Those also contain the principal axis. Not every molecule has them. And they bisect the C2 axes that are perpendicular, if there are any. In this case, there aren't. A horizontal plane would be one that cuts perpendicular to the principal axis. So what do you think? Does this molecule have one? No, right? Because if we flipped it over, then this substituent would be down instead of up, and it wouldn't be the same thing. So is it C2v? You are very close. It would be C4v, right? Right. It would be C2v if its principal axis was a C2 axis. Yeah, yeah, yeah. But instead it's a C4, so it's C4v. So great job. Thanks for volunteering. I've never seen this stuff before. You did a good job. Yes? A dihedral plane is like a vertical plane in that it contains the principal axis, but it also bisects the C2 axes that are perpendicular to it. All right. We need to have one discussion going on, and not many. I'm really happy to answer everybody's questions, but we need to do them in series and not in parallel. Okay. So the question is: what's a dihedral plane? So let's find a molecule that has some. I made a whole bunch of models, which is really nice except that I can't find anything. Okay. So here's benzene. So, some rules for looking at symmetry.
In general, when we're assigning things to a point group, we're going to assume that resonance structures are fluctuating back and forth so quickly that we can't see the individual structures — which of course is why we have resonance structures anyway: you have an average of these bond lengths. So this molecule, which you can work through yourself, belongs to the D6h point group. So it belongs to one of the D groups because it has a horizontal plane, and its principal axis is a C6 axis. So benzene is D6h. And so, as we were talking about just now: do we have C2 axes perpendicular to the principal axis? In this case, we do. We can flip it over all kinds of different ways. Oh, thank you. So we can flip it over, looks like, three different ways perpendicular to that axis. And so then we also have some dihedral planes, which are vertical planes that bisect those C2 axes. And one thing that's really nice about the character tables is that some of these symmetry elements are really hard to visualize — or at least, when you go through and try to count all of them, it's hard to make sure you didn't miss any. And the good news is that you don't really have to do that, because once you assign the molecule to a point group, if you open up your point group table and look at the D6h group, the first thing you notice is that benzene has a lot of symmetry operations. But it lists for you what they all are. And so you don't necessarily have to go through and find all of them yourself. Once you get it into a point group, then you can go back after the fact and check out what all the symmetry elements are. The character table gives you a lot of other information about the molecule, and we're going to talk about a lot of that today. But before we go on, I do want to talk a little bit more about assigning things to point groups, because this is a really important skill; if you have a hard time with it, it's going to be challenging to keep up later on. So let's make sure that everyone gets it. And again, if you don't and you need more practice: stay after class, I'll be here answering questions; come to office hours; ask the TAs in discussion. You know, it is something that once you get it, you do. But it can take a little bit of practice. Okay. So let's look at this molecule. Can I have another victim? I mean, volunteer. So that molecule has some interesting things going on. Okay. So here's your flow chart. So the first thing we want to know — it's not linear — is it low symmetry or high symmetry? Okay. This is a controversial molecule. Some people are saying it's high symmetry and other people are saying it's low symmetry. Okay. So I don't think it's high symmetry, right? Because it's not icosahedral and it's not tetrahedral — that would be like this. And octahedral we saw a little while ago. So, well, let's see if it's low symmetry. So to give examples of some of the low symmetry point groups: C1 is the one that doesn't have any symmetry elements. That's like that. Do you think it's like that? Does it have no symmetry elements? It can't be C1, right? It looks like it has an inversion center. If I turn it inside out, this carbonyl would go over here and this CH2 would go over here and this methyl group would go over there. So another assumption that we make — you know, I said that we assume that resonance structures are their average structure — we also assume that there's free rotation about single bonds.
Those methyl groups are just spinning around. If we're going to talk about a situation where that's not going to happen, I'll tell you that the molecule is really, really cold, so it's not moving. Otherwise, we're going to assume that it does. Okay, so it's not low symmetry — or at least it's not C1. So what about something like Cs? That's where it just has the identity and a mirror plane. We already know it's not that, because we said it has an inversion center. One of the other low symmetry groups is Ci, which means it only has an inversion center. So what do you think? Does that thing have anything going on other than its inversion center? So if we cut it like this, we would have this carbonyl over here and that one over there, and that wouldn't be the same, right? Is there any way we can rotate it? What do you think? Yeah, I think she's right. It doesn't have anything else going on. So that is Ci. Thank you. Okay, so those are some examples — now you see what the low symmetry groups look like. And I think we're going to stop there for examples, although, you know, I'm happy to do huge numbers more if you come to office hours or stay after class or things like that. I guess another thing that I want to point out is that until you get good at doing it really fast and just looking at them, it's best to go through the flow chart and assign them. So one thing that people get confused about is looking at the symmetry operations versus the names of the point groups. So for instance, I noticed that one mistake that people make sometimes is looking at something and saying it has an improper rotation axis, and so then they think it has to belong to one of the S2n groups. Not necessarily — lots of things can have an improper rotation axis without belonging to those groups. So, you know, it's good if you just go through systematically and look at the flow chart. Okay, so I think that's it for playing with the Tinker toys today. Let's go on and do some other things. So I'm going to switch back to PowerPoint. Yes? I'm sorry, can you speak up please? Sure. So an improper rotation, again, is when you rotate by 360 degrees over n and then reflect through a plane that's perpendicular to that axis. So if it were an S3 axis, we would rotate by 120 degrees and then reflect. Can you show us both of them exactly? I can show that, sure. So, all right, we still have that up there. So if I were going to do an improper rotation, I would rotate by a third of a turn and then reflect — you know, flip it. So, rotate by a third of a turn and then reflect through this plane. So see what I mean? I can't quite do it to the model, but that's what it is. Okay, so let's — one more question. When you're looking for a point group of something like ethane, would you do it for staggered or eclipsed? That's a good question. So the question is, for ethane, if you're looking for the point group, would it be staggered or eclipsed? So remember, we said that by default, if I don't tell you anything about it, we're going to assume that there's free rotation about single bonds. So you can just assume that those methyl groups are rotating. If I wanted you to do it for staggered or eclipsed ethane, I would have to tell you that specifically. Otherwise you wouldn't know. And, you know, of course those configurations do exist at low temperatures. It's just, you know, otherwise we're assuming things like methyl groups are just rotating around all the time. So it's pretty much treated as one big group?
In that case, I mean, yeah, you just treat it as one big substituent. You know, the methyl group is just freely rotating. That said, we might see problems where we say that something is staggered or eclipsed, and you just have to pay attention to the description of the molecule. Okay, so now let's talk about all the information that you get in the character table. So far we've done examples where we look at how to put things into a particular point group, and that leaves aside the question of why we want to do this. So, the reason we want to do this is that once we do, we get all kinds of information about the molecule for free. Somebody already collected it and put it in this character table, and we can use it. Okay, I would really like this to show my slides now. Okay, good, there we go. All right, so there's our flow chart. Okay, so now let's talk about the character table. So, everybody has this in front of them. Let's look at the information that it gives you. Okay, so I have put some examples here on this sheet just to show some. And we're going to be using this a lot, so please bring it to class to follow along with the discussion. And these are the same character tables that you'll be given on the exam. So, okay, so if we have the C2v character table — a familiar molecule that belongs to that point group is water, just to visualize it — let's look at the information that we have here. So, in the top left, we have the name of the point group. And then going along the top, we have the names of the symmetry operations that belong to that point group. So, E is the identity — that's "do nothing". C2 is a 180 degree rotation. And then we have these two planes, sigma v (XZ) and sigma v prime (YZ). So, they're called sigma and sigma prime just to distinguish that they're not equivalent to each other. Because if we have a water molecule — this is a little bit hard to see because it's small, but, you know, everybody knows what water looks like, so it should be okay — we have our two planes. One is, we can slice through the molecule like this so that one hydrogen ends up on either side. And the other one is, we can cut through the whole thing so that we're slicing through the oxygen and both of the oxygen-hydrogen bonds. And those two planes are not equivalent to each other, so that's why they get separate entries in the table. And then the next question is: how are the X, Y and Z axes defined? The principal axis is always the Z axis, and then you just use the right hand rule. Okay, so that tells us the total number of operations. And the number of operations that exists in the group is kind of a measure of the symmetry of the molecule. And so for C2v, there are only four of them. We have the identity, we have the 180-degree rotation, and then we've got these two reflection planes. And so that's kind of all there is. All right, we're going to come back to what all the rest of this stuff is, but let's look at C3v now. So that's a molecule like ammonia. Question over here? We haven't gotten to that yet; we're going to come back to it. Okay, so right now we're just talking about the symmetry operations in the group. Okay, so if we think about ammonia, that has the identity in this group, which everything does. And then if we look at the next entry, we have 2C3. And so what that means is that there are 2 C3 operations that you can do. So I have my ammonia molecule, and I have one of the hydrogens sticking out toward you, and the other ones are pointing off to the sides.
And what the 2C3 designation means is that I can rotate this once, and that gives us an equivalent state as far as symmetry, but it's not identical to where it started out. And then I can rotate it again, and, you know, again, it's symmetrically equivalent — but if we could tag all of these hydrogens, I mean, imagine that we can isotopically label them so that one's a proton and one is tritium and the other one's deuterium, so we can tell them apart — we have to go around the third time before we get back to the initial configuration. And this is an important thing. We have to be able to make a distinction between things that are valid symmetry operations, which this is, and things that are indistinguishable from the original configuration. So that's why we have 2 C3 operations: because we can go around once, twice, before we get back to the original configuration. It doesn't mean that it has two separate C3 axes. Now, don't get confused, because in some other point groups it might mean that something has multiple axes that are the same. But the important lesson here is that when you have operations listed separately — like sigma and sigma prime — that means they're not equivalent; but if it's called 2 sigma, then that's describing two operations that are equivalent. So then similarly, we have 3 sigma v — remember, a vertical plane contains the principal axis, and we have three of them, because we can cut through any of these bonds, and that gives us a symmetry operation, and they're all equivalent to each other. So, yeah, question over there? The molecule that had the square on the bottom, and then something sticking up — it was in C4v. Yeah, that's an interesting question. So there you do have a C4, but only 2 C4 operations, right, because you can go, you know, one way and then the other way. Okay, so tetrahedral — we're not going to go through all of them, but notice it has a lot of symmetry operations, and that should fit with your intuition that a tetrahedral molecule like methane is more symmetric than these other things. Another important characteristic is the number that you get when you add up all of these symmetry operations; that's called h. Some point group tables give it to you; this one doesn't, so you have to add it up yourself, but that's something you can do. All right, question? So you said the 2C3 — that means that you have to do 2 C3 operations before you get back to, like, the original molecule? Before you get back to the original molecule, yeah. And does it matter which way you rotate them? You know, by convention, we usually do it counterclockwise, but, no — if you did everything the other way, as long as you're consistent, you'd get the same answers. But for purposes of doing stuff in class, the convention is usually that we do it counterclockwise. Yes? This example is the 2C3, and that other one, the square pyramidal one you had — is that one 2C4? Well, so if you rotate about the principal axis, you can do the C4 three times before you get back to the start. And you said, yeah, if you do the C4, it's 2C4? Because in that case, if you rotated it like this, or like this — there were two ways to do it. There's two ways to rotate that one. So my point is just, you know, be careful, because there are these operations with coefficients in front of them indicating that you have multiple ways to do the same operation.
And sometimes it means just that you can do the operation a couple of times before you get back to the original state. And other times it means that you have different axes or different planes that are equivalent. We'll see more examples of this as it comes up. I don't want to spend a whole bunch of time talking about every case, because it gets a little bit abstract. Let's wait and see examples. Okay, so now, what is all the rest of this stuff on the character table? That's a lot of what we're going to spend time on today and Friday. Okay, so these A's and E's and T's — those are the irreducible representations, or the symmetry species, of the group. And what those are is a complete description of objects that can behave in certain ways under these particular symmetry operations. And we're going to talk about that with some concrete examples a little bit later on. So, some things to know about them: the ones that are called A and B are singly degenerate, the ones that are called E are doubly degenerate, and the ones that are called T are triply degenerate. And then let's look at the other information that you get in this table, which starts to give you some hints about how you might be able to use this information. And that is, we have things like X, Y and Z, and we have XY, XZ, YZ. These are linear and quadratic terms, you know, in terms of the Cartesian coordinates. So, for X, Y and Z, right now you can think about each of those as either a little unit vector directed along the appropriate axis, or you can think about it as a PX, PY or PZ orbital in terms of how it transforms with respect to symmetry. Those are very intuitive concepts for chemists and chemical engineers, so it helps if you visualize it as an orbital. The XY, XZ, et cetera, and X squared minus Y squared — you can think about those as D orbitals. They're going to have other interpretations when we get into talking about infrared and Raman spectroscopy later in the course, but for now you can think about these just in terms of orbitals. Okay, so you can start to see what's useful about this table. So, once you assign something to a point group, for one thing, there's a limited number of objects that can behave a certain way under these symmetry operations. We have a complete set of symmetry operations to work with. And we can already see that we learned some information about how at least orbitals behave with respect to this symmetry. And this is already written down for you in the table. Okay, so having gone over that a little bit, we are going to switch gears and talk about matrices and how to make matrix representations of operators. And we're going to do a little review of how to deal with matrices. Hopefully this is review for everybody; if not, we're going to go over what you need to know about it, so don't worry. If you need a little bit of extra practice or background, please check out the Wikipedia page and/or the Wolfram site on matrices and matrix multiplication, rotation operators, things like that. Okay, so if we have a matrix which we're going to call A, these entries are its matrix elements, and we can call those Aij. We'll see that kind of terminology a lot. So in this case, A11 is minus 3, A12 is 6, et cetera. That's just how we label them. And we're just going to go through a quick review of how to deal with matrices. So you can add them if they have the same number of rows and columns. And if you can do it, it's pretty easy: you just add up the individual matrix elements. And so here's what you get in this case.
We just add the individual matrix elements and get these cells. So I know everybody's probably seen this stuff before, but it doesn't hurt to have a little bit of a review, especially since, if you didn't really talk about matrix representations of operators last quarter, this actually makes your life quite a bit easier; I think it's much easier to deal with operators in that formalism. Okay, so that's how we add them. That actually doesn't come up terribly often in the kind of things that we're going to do. Here's something that does. If you want to find the trace of a matrix, you just add the elements on the diagonal, and ignore everything else that might be in the matrix. It doesn't matter; we're just going to add the elements that are along the diagonal. The trace is also often called the character, which gives you a hint as to what the character table is about and why we're talking about this right now. So all of those ones and minus ones and zeros and twos, et cetera, on the character tables: each one represents the character of the matrix that corresponds to that particular operation for a particular symmetry species. And we're going to learn how to make our own, if not by the end of today, then by Friday. Okay, so the character is a lot of times given the symbol chi. In this case, it's seven. So that's a really important matrix operation. Fortunately, it's easy. We can also multiply matrices by scalars; in order to do that, we just multiply each element in the matrix by the scalar. And we can take the trace of that one too. So again, pretty straightforward stuff, but it's good to go over it just in case. All right, let's talk about matrix multiplication also, in case you haven't seen it in a little while. So when we go to multiply the matrices, I'm going to write this all out once. We go through and multiply the first row of this one by the first column of that one, so we get 1 times 5 plus 2 times 8 as the first matrix element in the new matrix. And then we just go across. So now we have 1 times 6 plus 2 times 9, etc., and we build up our new matrix like that. So pretty simple, but you have to double check, because it's easy to make a mistake. How many people have taken Chem 5 or otherwise know Mathematica? It's a lot easier if you use Mathematica. Most of the examples that we'll do in class will be relatively simple, and you'll be able to do them in your head fine enough, but if you have to do this for matrices of any size, use Mathematica. It makes it a lot easier. Okay, so here's what we get for this particular one. And it's also worth pointing out that matrix multiplications don't necessarily commute. So if we multiply these two things together and then we do it in the other order, you don't get the same answer. And of course this relates to stuff that you learned last quarter in quantum mechanics: a lot of operators don't necessarily commute, and they can be represented as matrices. And we'll also see that in some point groups, symmetry operations may or may not commute. All right, so other things to look at. If the product of two matrices equals zero, that doesn't necessarily imply that either of the matrices has all zeros in it. There are different ways to get that. So that's our little review of matrix properties. Again, if you need more review than that, check out the Wolfram site and/or Wikipedia. Wikipedia is a really great resource on things like this that are noncontroversial.
Of course, for things where there are differences of opinion, people can change it all the time and troll each other; nobody really does that on sort of basic math and chemistry and physics topics, so it's a good thing to use as a resource. Okay, so now that we've talked about properties of matrices, let's start looking at how to construct transformation matrices for actual operations that we might want to do. And we're going to do it in two-dimensional space to start with, just to make things easier. Okay, so the way we're going to do this is: I want to accomplish some transformation, and I'm going to apply it to a test vector, which I'm just calling alpha beta. And we need to think about what we want alpha beta to transform into, and then what matrix we have to multiply by it to get that result. So if we want a reflection about the y-axis, remember, we're in a two-dimensional plane, so we need to think about what we multiply by alpha beta in order to get it reflected about the y-axis. Of course, if we reflect it about the y-axis, beta isn't going to change and alpha is going to change sign. And so, working backwards, we have to think about what we need to multiply by that vector in order to accomplish our transformation. And as we're going to see, the matrix you get depends on what you're trying to do and what object you're applying it to, but we're going to talk about the cases of just doing this in two-dimensional and three-dimensional space. Okay, so what if we want to do a projection on the x-axis? So we only want to see the x component. So what do we have to multiply by alpha beta to get just the projection on the x-axis? Yeah, so I hear people following along, so everyone gets it, that's cool. All right, what if we want to scale it by 3? So we just need something that has 3's on the diagonal. So this is why I like group theory and these kinds of geometric transformations, because they really give intuition into how we can set up matrix representations of different operators. The quantum mechanical operators, of course, are all linear operators, as you learned last quarter, so they can be represented this way. But doing this with these geometrical things helps give us an intuition for how to use it before we have to get into more complicated concepts. Okay, so in general, if we have some vector and we want to rotate it, so we had our first vector r1 and now we move it into this position r2, we can just set up how we want to do this rotation: if we look at x2 and y2, we have x2 equals r cosine of alpha plus theta and y2 equals r sine of alpha plus theta, and we can expand these out with the angle-addition formulas. And that gives us the rotation matrix that we need to be able to perform this particular transformation. Rotation matrices are something that we're going to see a lot. We're going to use them now when we talk about group theory, so hopefully it's clear how that's going to work and how we're going to use them quite a bit. We're also going to use them when we talk about NMR spectroscopy and look at how spins behave in a magnetic field. And really they come up in all kinds of different areas of chemistry and physics. It's a useful thing to know how to do. Okay, so having gotten this far, you have enough information to definitely do the practice problems which are posted online, so don't try to write them all down right now; I just want to point out that they're there. So do go ahead and check these out online and try to do them for Friday.
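Since these transformations are easy to check numerically, here's a minimal sketch in Python with numpy; the test vector (2, 3) and all the variable names are made up for illustration, not from the lecture slides.

```python
import numpy as np

# A test vector (alpha, beta), as in the lecture; the numbers are arbitrary
v = np.array([2.0, 3.0])

# Reflection about the y-axis: alpha changes sign, beta doesn't
sigma_y = np.array([[-1, 0],
                    [ 0, 1]])

# Projection onto the x-axis: keep x, zero out y
proj_x = np.array([[1, 0],
                   [0, 0]])

# Scaling by 3: just 3's on the diagonal
scale3 = 3 * np.eye(2)

# General rotation by theta (counterclockwise, per the class convention)
def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

print(sigma_y @ v)            # [-2.  3.]
print(proj_x @ v)             # [2.  0.]
print(scale3 @ v)             # [6.  9.]
print(rotation(np.pi/2) @ v)  # (2, 3) rotated 90 degrees -> (-3, 2)

# And matrix multiplication doesn't necessarily commute:
R = rotation(np.pi / 2)
print(R @ sigma_y)  # reflect first, then rotate
print(sigma_y @ R)  # rotate first, then reflect; a different matrix
```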
Okay, having looked at that, let's move on to three dimensions. So we talked about our little two-dimensional rotation matrix; now let's look at this in three dimensions. And our basis is little unit vectors pointing in the x, y, and z directions. And notice I'm going to try to be really careful about telling you what basis I'm using, and if I don't, you should ask me, because it's a really important question. That affects everything about the problem. So right now it's just our unit vectors. Okay, so what if we want to do a C2 rotation? So 180 degrees. If we have our x, y, and z unit vectors, that's going to flip the signs of x and y and leave z alone. And so this is going to tell us what our matrix is. Yeah? Well, this is in three dimensions now; we were doing it in two dimensions before. Yeah, so you raise a really important point, which is why I said that I have to be very careful to always tell you what basis we're using, because it changes everything about the problem. So before, we were starting with a 2 by 2 because we had a two-dimensional vector; now we have a three-dimensional vector. Okay, well, I think the matrix is more notationally wrong, because you have the 2 by 2 and the 2 by 1 that you want to multiply: you have the negative 1, 0, 0, 1, and then alpha beta, and then it came out as negative alpha, 0, 0, beta. So a 2 by 2 times a 2 by 1 should give a 2 by 1, not a 2 by 2; I don't see how that works. I'm going to check it and write up something about it. Sorry about that. It's just that I want to get through a little bit more of this before the end of class, and whatever is confusing we can go over later. Okay, so let's talk about our rotation matrix for C4. This one's a little bit more complicated, because we flipped the positions of x and y and we made x negative, and again z stays the same, because we're rotating about the principal axis. And so that's the rotation matrix that we end up with for C4. And so what I want to point out is that here's what we get for a general rotation matrix about any angle: we need to put in the sines and cosines. And so in Cartesian coordinates, here are the general rotation matrices for some angle about the x, y and z axes. And these are things that are going to come up over and over again, and we're going to use them. So again, you don't have to write it down right now; it's available, you can look it up, but they are going to come up and it's important. I also want to point out that the inverse of a matrix is the matrix such that, if you multiply a matrix times its inverse, you're going to get an identity matrix, which has just ones on the diagonal and zeros everywhere else. Sometimes it's called I; if we're talking about the identity operation in terms of the character tables, we call it E. And if A represents some transformation, then its inverse, which is called A to the minus 1, undoes it and returns it to its original state. And here that is written out. So, okay, that's pretty good as far as where I wanted to get this time. Next time we're going to tie it all together and see how to use this in terms of group theory. Yes.
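Tying the matrix review to the symmetry operations: below is a small numpy sketch of rotations about the z (principal) axis. The traces it prints are exactly the kind of characters that show up in character tables for these operations in an x, y, z basis. The helper name Rz is my own shorthand, not standard notation.

```python
import numpy as np

def Rz(theta):
    """Rotation by theta about the z axis, in the (x, y, z) unit-vector basis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

C2 = Rz(np.pi)           # flips the signs of x and y, leaves z alone
C4 = Rz(np.pi / 2)
C3 = Rz(2 * np.pi / 3)

print(np.round(C2))      # diag(-1, -1, 1)

# Doing C3 three times gets back to the identity, E
print(np.round(C3 @ C3 @ C3))   # the 3x3 identity matrix

# The trace (the character, chi) of each operation in this basis:
for name, M in [("E", np.eye(3)), ("C2", C2), ("C3", C3), ("C4", C4)]:
    print(name, "chi =", round(np.trace(M), 3))
# E: 3, C2: -1, C3: 0, C4: 1  (in general, chi = 1 + 2 cos(theta))

# A rotation times its inverse returns the identity
print(np.round(Rz(np.pi / 2) @ np.linalg.inv(Rz(np.pi / 2))))
```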
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013)
Lec 02. Molecular Structure & Statistical Mechanics -- Symmetry and Spectroscopy -- Part 2
Instructor: Rachel Martin, Ph.D.
Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules.
Index of Topics:
0:03:12 Examples of Point Groups
0:13:55 Example: Low Symmetry
0:20:02 Matrix Representation
0:34:27 Matrix Multiplication
0:36:46 Transformation Matrices
0:41:42 Rotation Matrix
0:44:07 Matrix Representations of Operations
0:45:36 Inverse of a Matrix
10.5446/18908 (DOI)
Hi. Good morning everybody. Today we're going to continue talking about partition functions and hopefully get a little bit more familiar with what this means and how we write them down, and we'll do a few specific examples. Then next time, on Monday, we're going to talk about issues like what happens when your system of particles interacts. And let's see if I can get my mic to not make that noise. Maybe not. Okay. So in the last couple of lectures we'll learn about what happens when our system of particles interacts. And then Wednesday next week the lecture will be given by your TA, John Mark, who is going to go over some examples of partition functions. I think it will be really great. It's his lecture debut. I will be in Washington, D.C. reviewing NIH proposals. So, you know, P. Chem is an important part of my job, but it's about a third of the job; the rest of it is managing a research group and reviewing things like proposals and stuff like that. So how that works, how people get NIH grants, is everybody submits proposals, and then reviewers who are other professors review these things and look at factors like: is there a well-laid-out plan? Are there benchmarks for success? If it succeeds, is it likely to do something important? And then everybody has to go to Washington, D.C. and sit around and discuss these things and give them scores. So that is what I will be doing next week on Wednesday. That also means that office hours next week are canceled, because I'll be traveling Tuesday and Thursday, and Wednesday I'll be in D.C. So please use the Facebook page if you have questions. I'll be checking that relatively regularly. Your TAs will be checking it also. And I will have office hours during finals week. Our exam is Friday, so there's plenty of time to prepare. I don't know exactly when yet, but I'll be posting those later on. But I'll definitely have a bunch of office hours during finals week, so there will be plenty of time to ask questions. Anybody have any questions for me about stuff before we continue talking about stat mech? Okay, let's do it. All right, so last time we ended up talking about the rotational spectrum of HCl and how we get the intensities of different peaks. We looked at the relative populations between the ground state and the first excited state in this rotational spectrum. And so we looked at the fact that this relative population just depends on the degeneracy of the states and the energy difference between them. And a parameter that's really fundamental to this is the temperature. So that's something that we're going to keep coming back to in stat mech. The way the internal energy is distributed tells us about which states are accessible, and the parameter that's really fundamental to that is temperature. And in some ways, the really fundamental quantity is beta, 1 over kT. Okay, so we saw a specific example of how we get these populations, and we're going to come back to rotational spectra. But let's look at how we write down the partition function in a more general way. So we can write down our population of some state i in a relative sense, so relative to the total number of molecules in the system. And that depends on this parameter beta, which again is 1 over kT. So we've got e to the minus beta times the energy of that state, and then that's divided by the partition function, q. And the partition function tells us about how much energy is in different modes of the system. So what do I mean by different modes? It depends on context.
We could be talking about different vibrational modes or rotational modes or translation of the molecule bouncing around in a container. All of these things could possibly be contained within the partition function, electronic states too for that matter. Most of the time we try to treat all of these things separately if it's possible, just because it's a pain to have to deal with all of these variables simultaneously, and usually they don't interact with each other. So we try to separate them when we can. So in the previous example we were just talking about the rotational states, and we wrote down, sort of justified in a hand-wavy way, the relative population between two states. But now here's the real definition of the partition function. Okay, so we sum over all the states: for each state we take the degeneracy times e to the minus beta times the energy of that state, and we sum that up over all the states. So again, how is "all the states" defined? Well, for something like an NMR system that's really easy. If we have a spin one-half, it's a two-level system; there are only two of them. There are other things that act like two-level systems, particularly if we're talking about electronic spectroscopy; a lot of times you'll only have really low-lying excited states available, and there's a well-defined number of them. So in that case we can make that kind of approximation. For things like vibrational and rotational states, we might have to take something that looks like an infinite series to sum over all the states. So what you actually do here depends a lot on context and the particular mathematics of the situation we have. Okay, so that's how we express the partition function in a general sense. And again, what this is telling us is something about how the energy is distributed among different states, and more specifically it's telling us something about how many states are accessible to the system at a given temperature. All right, so let's go back to our rotational spectrum of HCl and write an expression for its rotational partition function. Okay, so we need the energies of all the rotational states and their degeneracies. And I realize my title of this slide is a little bit ill-chosen, because that's of course an IR spectrum of HCl, but we get the rotational energy levels from it. So the idea is there, just the terminology isn't the best. Okay, so the energy of some level J is hc times the rotational constant B times J times J plus 1. And remember, in the context of the partition function we want to define the ground state to have zero energy, just because it makes the math easier. So we know that there's some zero point energy; it's not actually zero, but we define it that way in this context. We also know that the degeneracy of each level is 2J plus 1. And so here's the expression for the partition function. So there's not really an upper limit on the number of rotational states; you can just put more and more energy into the system and the molecule will rotate faster and faster, and at high energy so many states are populated that it starts to behave more like a continuum, but there's not really an upper limit, so we have to sum over all these levels from zero to infinity. And then we plug in the expressions for the degeneracy of the states and for their energies, which we know from having looked at this previously. And that can be evaluated numerically pretty straightforwardly using the experimental energies.
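As a minimal sketch of that definition in code (the function names and the convention of passing explicit lists of energies and degeneracies are my own scaffolding, not anything from the book):

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def partition_function(energies, degeneracies, T):
    """q = sum over levels of g_i * exp(-beta * eps_i),
    with energies shifted so the ground state sits at zero."""
    beta = 1.0 / (k_B * T)
    eps = np.asarray(energies, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    return float(np.sum(g * np.exp(-beta * (eps - eps.min()))))

def population(i, energies, degeneracies, T):
    """Fraction of molecules in level i: g_i e^(-beta eps_i) / q."""
    beta = 1.0 / (k_B * T)
    eps = np.asarray(energies, dtype=float)
    eps = eps - eps.min()
    q = partition_function(energies, degeneracies, T)
    return degeneracies[i] * np.exp(-beta * eps[i]) / q
```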
So in other words, you can count the peaks in the spectrum, see how many you can realistically see, plug in all the energies for these things, and calculate a value for the partition function, and you'll get a number. I want to just mention something here which is going to come up again on our example day, and that is: if you try to do this with rotational Raman rather than a pure rotational spectrum, you might end up counting too many states, because in rotational Raman you get the same configuration of the molecule twice during every rotation. So just keep that in the back of your mind; we'll see that more during the example day. Okay, so when we go to evaluate this, we can look up the rotational constant for HCl, and it's about 10.6 wave numbers, and we can plug that into our expressions for the energy. And if we take the sum of the first 10 terms, so we're looking at the first 10 states in the rotational ladder, let's see what we get for the partition function. And if you count the peaks in that spectrum, you can see that we have not very many more than 10. Of course those are giving you transitions rather than the states, so in other words this approximation is not perfect, but we're seeing most of the states that are populated if we take the first 10 terms. Okay, so I got these numbers out of your book; it is really straightforward to calculate them, you're just plugging in the energies of the different states. So if we evaluate this quantity that we're summing over for each of these states, here's what we get. We already did this: we saw that the relative population of the first excited state to the ground state is about 2.71, and we know that that's because of the degeneracy; there are more ways to be in that first excited state. And then similarly, as we get up to the second excited state, we have even more ways to do that, and at this temperature, which is 298 Kelvin, the degeneracy is still dominating. Same thing as we go up to J equals 3: there's still more population in that state. And then that starts to level off as we get up to J equals 4. And so at this point it should be really clear why the relative intensities of the lines in the spectrum look the way they do. Remember, in the spectrum we're looking at the transitions between one state and another, so we can't just map the heights of the intensities onto the populations of a particular state, but it does tell us something about what's populated. And okay, so we saw that this starts to turn over at about J equals 4, and the relative populations start going down again, and then as we get up to J equals 10 we get something that looks like 0.08. Okay, so if we add this up, the sum of the first 50 terms is about 19.902, and it was 19.90 for the first 10 terms. So we can see that there are really not more than about 10 states populated in this system at room temperature; adding another 40 of them doesn't really do you much good. Of course this is going to change if we change the temperature. If we cool the system down significantly, what's going to happen is we're going to get a tighter distribution, so there will be more population in the most populated states, things won't be as spread out, and the maximally populated state will also move to being lower. If we heat the system up, then we're going to get something that's much flatter; it's much closer to all of the states being equally populated.
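Here's roughly what that numerical evaluation looks like, using the B of about 10.6 wave numbers quoted above. A convenient trick is to work in wavenumber units, where k/hc is about 0.695 cm^-1 per kelvin; small differences from the book's numbers just come from rounding B.

```python
import numpy as np

kT_hc = 0.695 * 298.0   # kT/hc at 298 K, in cm^-1 (about 207 cm^-1)
B = 10.6                # rotational constant of HCl, cm^-1

def term(J):
    # degeneracy (2J+1) times the Boltzmann factor for E_J = hcB J(J+1)
    return (2 * J + 1) * np.exp(-B * J * (J + 1) / kT_hc)

for J in range(5):
    print(J, round(term(J), 2))   # J=1 gives ~2.71, as above

q10 = sum(term(J) for J in range(10))
q50 = sum(term(J) for J in range(50))
print(round(q10, 2), round(q50, 2))  # both ~19.8-19.9: the first ~10 states
                                     # carry essentially all the population
```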
Okay, so this also leads us to an approximation that we can use for these kinds of things, which is that the rotational partition function approximately equals kT over hcB. And yesterday when we were looking at this spectrum, this quantity was just kind of stuck up there; this is why: it's a reasonable approximation to the rotational partition function. And if you do that you get about 19.6 in this case, so it's not perfect, but it does give you something that's in the right ballpark, and it isn't very much work. Yes? Is it related to the population? So this quantity that we're adding up in the partition function, it's essentially a relative population, relative to how many are in the ground state? That's right. Well, to get the actual population of the state you have to divide by the partition function, which is telling you something about the overall population. So we're going to see some more examples later. Okay, so what this is telling us is how many states are thermally accessible at a particular temperature. And so let's just think about some limiting cases to try to get a sense of this. So let's say the temperature is close to zero; we cool our system almost all the way down to zero Kelvin. There's not very much motion going on. And here beta is 1 over kT, and that starts to approach infinity as the temperature gets close to zero. And so what that tells us is that everything other than the first term in the sum is going to equal zero: they all look like e to the minus x with x going to infinity, except for the first one. And that gives us something that's really intuitive: if we really cool the system down to where it's close to zero, almost everything is going to be in the ground state. And I should point out that a lot of people who are really looking at this from a theoretical physics perspective, or from some other systems that behave this way in things other than statistical mechanics, would say that beta is really the fundamental parameter rather than temperature. Because if we think about our standard understanding of how this relates to temperature, which is that higher temperature means more entropy, more motion, things moving around, that standard understanding means that we can't have a negative temperature. That seems to be impossible from our intuitive understanding of how molecules work: we have some zero point energy, things are cooled down to absolute zero, and nothing is moving. And that's because we're thinking about this in terms of a particular type of physical system, that being molecules that are moving around. In other sorts of things we can have negative temperature, and it's kind of a bizarre concept. So something that has a negative temperature, you might think it's really, really cold. It's not; it's really hot. So let's think about a system that we have talked about that has a negative temperature. If we think about our NMR system at equilibrium, we have spins in the alpha and beta states and there's a little bit of an excess of the alpha state. And then we go to do an inversion recovery experiment: we have our magnetization vector, where there are more spins in the alpha than the beta state, and then we give a 180 degree pulse, and now we have a population inversion. So now there are more spins in the beta state than the alpha state.
During the time that the system is like that, before it relaxes back to equilibrium, it has a negative temperature. There are more spins in the higher energy state than in the lower energy state. And that was one of the main reasons, I think, why Purcell got the Nobel Prize for the discovery of some of these NMR phenomena, because physically that's a really weird setup to be able to put a system in. Recently some physicists were able to generate a system with negative temperature in terms of actual molecules; we'll talk about that more later. But when you think about these kinds of issues, and how we can have something where putting more energy into the system reduces the entropy, then negative temperature is possible, and that's one reason why you might want to think about beta as being the fundamental parameter in a thermodynamic sense rather than the temperature itself. So if we look at our NMR system, it's an easy example because it only has two levels, at least in the spin one-half case. We have our two states, alpha and beta, and as the temperature gets close to absolute zero, our partition function ends up being very close to one, because everything is in this lower energy state, which is not degenerate. Okay, so we talked about what happens at low temperature. We talked about how we can get a negative temperature. Let's look at what happens when the temperature is high. One misunderstanding that people have sometimes, starting out with this, is you might think, okay, when temperature is high, then you have an excess of spins in the upper state. You don't. You have to set the system up in a particular way to get that; that's our 180 degree pulse, where we end up with a negative temperature. Just heating up the system doesn't do that; it doesn't make us have more spins in the excited state. What happens when the temperature is high is that you tend toward equal populations in the states. You have plenty of energy; there's less reason why it matters if you're in the lower energy state as opposed to the higher one. And our partition function tends toward the solution where everything is equally distributed. So in this case our partition function is going to end up being 2 as our temperature goes to infinity. Okay, so let's look at some concrete examples of how to write these things down. This is something that you're definitely going to need to do. I'm still working on some practice problems for this; I'll have them up at some point today, definitely. I wanted to go beyond the ones that are in your book and give some more examples illustrating some probability ideas, so I will definitely have those up later today. So one of the things that you'll need to do is be able to write down partition functions for fairly simple systems. We're going to do some practice problems where you have to use some infinite series and add up stuff for rotational and vibrational states, but those are a little bit more involved; it's good to work them out to see how it works. As far as being able to do it on the exam, systems like this are more realistic. Okay, so we know our expression for the population of a state i. And let's look at how we write the partition function for things that are defined in terms of a small number of states. Okay, so let's say we have a two-level system where the lower state is non-degenerate, so there's only one way to get the lower state, and the upper state is doubly degenerate.
And so the first thing that you need to do to be able to solve such a problem is look at the description in words and be able to write an energy level diagram for it. So our lower state is non-degenerate; the upper state is doubly degenerate. And then it's important to remember that we always define the energy of our ground state as zero in these kinds of problems. It's completely general; it doesn't matter what kind of system it is, it doesn't matter at all, we define that one as zero. And we also said that in this case the energy of the first excited state we're just going to call epsilon. And so then we can write down our partition function. And remember, we have to stick the degeneracy in front, and then we have this Boltzmann-distribution-looking thing for the energy of the states. And so in this particular case we just get 1 plus 2 times e to the minus beta epsilon. So this is definitely something that you should know how to do: if you have a description in words of some system that contains a small number of states with degeneracies given, you should be able to write down its partition function. Yes? What was the degeneracy equation in this one? Because if you set the energy to zero then e to the zero equals one, so that's the 2J plus 1 with J equals zero, and for the second one, if you say J equals 1, then 2J plus 1 equals three, and that's times e to the minus beta epsilon. So where did the equation g equals 2J plus 1 come from? That's for a rotational system of a linear molecule. In this one, did I say it's a rotational partition function at all? It's just really general, right? I just told you the degeneracy of the bottom state is one and the degeneracy of the upper state is two. How do I know that? Who knows? Who knows what it even is? It's just very, very general. So don't get excited about using the specific rules for certain things if you're not looking at that situation; it's an easy mistake to make. And that's why we're going to go through a few examples of different kinds of partition functions. But yeah, in this particular case you don't even know what the system is, so how we got the degeneracy of the states is unknowable.
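And a minimal numerical sketch of that two-level example; the gap epsilon here is invented, chosen to be on the order of kT at room temperature just so the numbers are interesting.

```python
import numpy as np

k_B = 1.380649e-23  # J/K
eps = 4.0e-21       # J; made-up gap, comparable to kT at ~300 K

def q_two_level(T):
    """Non-degenerate ground state at 0, doubly degenerate state at eps."""
    beta = 1.0 / (k_B * T)
    return 1.0 + 2.0 * np.exp(-beta * eps)

for T in [10, 100, 298, 1000, 1e6]:
    q = q_two_level(T)
    print(T, round(q, 3), "ground-state fraction:", round(1.0 / q, 3))
# As T -> 0, q -> 1: everything piles into the ground state.
# As T -> infinity, q -> 3: all three microstates (one lower, two upper)
# become equally populated, so here degeneracy wins at high temperature.
```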
Okay, so let's talk about specific contributions to the partition function. Here is a general partition function, or at least the energies involved in it. The total energy is a sum of contributions, and since the partition function involves e to the minus beta times the energy, the exponentials of those contributions multiply, so the total partition function is a product of the individual ones. What's going on here with these different contributions? We're looking at all the degrees of freedom with respect to motion that our molecule can have, plus also the electronic transitions. That's not really a motion, exactly, but it's convenient to count it anyway, because it's something that we often have to worry about for molecules. So we have a translational component and a rotational component and a vibrational component and also an electronic one. And I mentioned this before, but it's worth bringing up again because it really makes your life easier: if it is at all possible, we only want to look at one of these things at a time, because usually they don't interact, and it just makes the math a lot easier if we're only adding up one set of degrees of freedom at once. Also, in context we usually only care about one of them at once, because we're looking at a particular type of spectroscopy, or we're analyzing some experimental data in which we would be unlikely to have all of these things going on at the same time. Here's an exception. If we have a system where more than the ground electronic state is excited, so we have a lot of electronic transitions going on, of course we know from looking at electronic spectroscopy that when you excite various electronic excited states, all the vibrational transitions get excited as well. So when our molecule gets promoted to an excited electronic state, that induces a bunch of vibrations, and so if we have more than the ground electronic state populated, then we can't separate those two. For most molecules that we're going to be looking at at room temperature, only the ground electronic state is populated, so that's a pretty good approximation for most of the things that we're going to look at, and we can generally always separate the translational and rotational states. Okay, so another thing that's important about this general kind of partition function is that it's usually not possible to solve it analytically. So this is where Mathematica is your friend; you're going to need a lot of numerical solutions. The examples that we're going to do in class are going to involve things where there's some approximation we can make. If you really get into doing stat mech, if you do this in grad school, one of the things you'll see when you start getting into more advanced problems is that there's always some little trick that enables you to make some approximation. It turns out that's kind of how you solve every problem: there's either some clever approximation that you can make, or you just brute-force do it with numerical methods. Okay, so now that we've seen how we write these things down in general, and we've talked about how we want to keep them separated if we can, let's look at some specific examples. The ones that I'm going through today are in your book, at least until the end; if I get as far as I think I'm going to get, there are some at the end that are not, but let's see how we do. Okay, so these are examples that you've seen before in different contexts, which is nice because you know the basic story; we can just talk about how it relates to the partition function. The first one we're going to look at is the particle in a box. This should be really familiar from last quarter; I know that you spent a bunch of time on this. So you know the energy levels for the n values of the particle in the box; you solved for these and you know what they are. The only thing that's different is, as always, we're going to put everything in terms of the lowest energy state. And so, if we define the energy for n equals 1 as epsilon, we can write that the relative energy epsilon for level n is n squared minus 1 times epsilon. Okay, so now we're going to look at the translational partition function for our particle in the box. We've got n squared minus 1 times epsilon, and we're going to approximate the sum by integrating e to the minus n squared beta epsilon, dn, from 0 to infinity. We take the integral instead of the sum because that's easier to deal with mathematically. And so we can rearrange, just for the sake of convenience, to make it easier to do. Yes? Yeah, hang with me.
See, I'm rearranging stuff to make it easier to do. If it's not clear by the end, we'll talk about it some more. Let's go through it. Okay, so we're rearranging this to look at it in terms of x. So this is a one-dimensional system by definition, right? It's a particle in a one-dimensional box, and so we're integrating over the length coordinate x. And if we evaluate this integral, we get an expression for our translational partition function as a function of x. And yeah, now that I'm looking at this, I think that the answer to Corey's question is that I shouldn't have skipped so many steps in the beginning; I should have written up the sum from n equals 1 to infinity and then shown that we're transferring this to an integral in terms of x from 0 to infinity, dx. But hopefully the answer makes sense. Okay, so here's our partition function for the translational part of the particle in a box. And remember that beta equals 1 over kT. And so we can write our partition function as x over capital lambda, where lambda is defined as this collection of stuff. Notice the mass of the particle is in there, and it has dimensions of length. And do check out the translational partition functions that are in your book; I really recommend doing the reading for these topics, definitely before coming to class on Monday. It's particularly dense in the book, and in this chapter there are a lot of examples, there's a lot of stuff going on, so it's useful to look at them. So this lambda is a quantity that's going to be important for translational partition functions in general, and one of the things that's in your book is an extension of it to three dimensions. And if you work all of these things out, you'll notice that it has dimensions of length, and it's related to the de Broglie wavelength. And so what that means is that the partition function increases with the length of the box and the mass of the particle, which should be consistent with your intuition about how this works. You have your system with the particle in the box, and you know that it behaves in a more quantum-like way for much smaller particles and for smaller areas of confinement, whereas when you get to a longer one-dimensional box or a heavier particle, it behaves more like the classical system: the levels are closer together, and it looks more like a continuum. You get the same kind of answer doing this in terms of the partition function. So remember, getting a larger value for the partition function means that more levels are accessible to the system at a given temperature. So it looks more classical for heavier things and larger boxes. Okay, so that's one example of a translational partition function. It doesn't have to be for something like a particle in a box; you can do this for just particles moving around in a flask. It's a little bit more boring for the classical systems, because there's not much quantization going on; almost all the levels are equally populated in that case.
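As a rough numerical illustration (the choice of argon and a 1 cm box is made up for this sketch, not from the lecture):

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
k_B = 1.380649e-23   # Boltzmann constant, J/K
amu = 1.66054e-27    # kg

def thermal_wavelength(m, T):
    """Capital lambda = h / sqrt(2 pi m k T); it has dimensions of length."""
    return h / np.sqrt(2 * np.pi * m * k_B * T)

def q_trans_1d(L, m, T):
    """One-dimensional translational partition function, q = L / lambda."""
    return L / thermal_wavelength(m, T)

L = 0.01  # a 1 cm box (arbitrary)
print(thermal_wavelength(39.95 * amu, 298))  # argon: ~1.6e-11 m
print(q_trans_1d(L, 39.95 * amu, 298))       # ~6e8 accessible states
print(q_trans_1d(L, 4.0026 * amu, 298))      # helium: lighter, so smaller q
# Heavier particle or bigger box -> larger q -> more classical behavior
```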
We can also do this for something that looks like vibrational spectroscopy: we can write down a harmonic oscillator partition function. Here's our potential for a perfect harmonic oscillator. The potential looks like a parabola, and then we have all of these vibrational states, the harmonic oscillator wave functions, which involve the Hermite polynomials. They're equally spaced, and the harmonic oscillator levels are also non-degenerate, just like the ones for the particle in the box. And we know that the separation between the levels is h nu; we can call it epsilon. And we can define the lowest energy one as being zero, so then the first excited state is epsilon, the second one is 2 epsilon, et cetera. And so we can start to write down an expression for the partition function of this thing. We have our energies in terms of epsilon spacing between them, and we know they're all non-degenerate, so that takes care of that term, and we can just start to add these things up. And we can observe that this starts to look like an infinite series that we know what it converges to, a geometric series. And so we can use this expression as the partition function for the harmonic oscillator. Okay, so how do we know that? Again, this is the case where you write down sort of how this is going and then use some results that you know; and this isn't even an approximation, because the infinite series converges to exactly that. It's just recognizing what the math comes out as. And if you get way into stat mech, you get more experience doing these things for different systems. Okay, so we can play around with this a little more and look at what our infinite series converges to. And so we have our partition function for the harmonic oscillator, and we can use this to get some relative populations. So the fraction of molecules in some particular level with energy epsilon sub i, again, we get this by taking e to the minus beta epsilon i over q, the partition function. And we can write out what this is. And again we get the pretty intuitive result that as we decrease the temperature, only the lowest energy state is occupied. And it's kind of nice to look at some of these systems where the states are all non-degenerate, because that gives us a really good intuitive feel for how things depend just on the energy; of course, when we get into things where there is degeneracy, that often wins. And so at high temperature, again, our partition function goes to infinity. In this case we have this parabola that's going up to infinity; there's an infinite number of vibrational states that can be excited. Of course, in a real molecule that's not a realistic approximation, right? Because in that case we would have a Morse potential, or eventually, if you put in enough vibrational energy, the molecule is going to vibrate itself apart. But in this idealized system we have an infinite number of levels, and the populations are going to tend toward being equal at high temperature.
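A quick numerical check that the truncated sum really does converge to the geometric-series result 1/(1 - e^(-beta epsilon)); the vibrational frequency here is invented for illustration.

```python
import numpy as np

h = 6.62607015e-34
k_B = 1.380649e-23
nu = 6.0e13          # Hz; a made-up frequency, roughly a stiff diatomic
eps = h * nu         # level spacing

def q_sum(T, n_terms=500):
    beta = 1.0 / (k_B * T)
    v = np.arange(n_terms)
    return np.sum(np.exp(-v * beta * eps))

def q_closed(T):
    beta = 1.0 / (k_B * T)
    return 1.0 / (1.0 - np.exp(-beta * eps))

for T in [298, 1000, 5000]:
    print(T, round(q_sum(T), 4), round(q_closed(T), 4))
# The two agree, and q grows with temperature as more vibrational
# levels become thermally accessible
```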
So, so far we've looked at some examples for various systems that we're familiar with. Let's go back to a translational kind of problem and something that we've seen in general chemistry. I've sort of alluded to this, but here I've found some actual examples for it. All right, so we talked about the Maxwell-Boltzmann distribution of molecular speeds for noble gases, so for ideal gases. At some given temperature, if we look at the speed distribution of these atoms, we see that helium, the one that's the lightest, has a really broad distribution; there are all kinds of different speeds going on in there, and it's relatively flat. Whereas xenon, the heaviest one, not only has a much lower average speed, but it has a lot narrower distribution of different velocities that it can have. And we can think about that lambda parameter that has dimensions of length; we can also get some relationships between that and the speeds of the molecules. So this goes back to the kinetic molecular theory of gases. And we can think about the effects of having heavier particles as being sort of analogous to taking the same particle and looking at it at different temperatures: having heavier particles is going to look similar, in terms of how it behaves, to taking the same kind of gas and cooling it down. We're going to save the actual details of that for next time; I just wanted to introduce it and give people time to think about it. Later today I will have some practice problems posted, and I will see you all on Monday. Have a good weekend.
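As a numerical footnote to that helium versus xenon comparison, anticipating next time's details: the script below is my own sketch, using the standard mean-speed result for a Maxwell-Boltzmann gas, v-bar = sqrt(8kT/(pi m)).

```python
import numpy as np

k_B = 1.380649e-23  # J/K
amu = 1.66054e-27   # kg

def mean_speed(m, T):
    """Mean speed of a Maxwell-Boltzmann gas: sqrt(8 k T / (pi m))."""
    return np.sqrt(8 * k_B * T / (np.pi * m))

for name, mass in [("He", 4.0026), ("Ar", 39.95), ("Xe", 131.29)]:
    print(name, round(mean_speed(mass * amu, 298.0)), "m/s")
# He ~1256 m/s, Ar ~397 m/s, Xe ~219 m/s: the lighter the gas, the faster
# it moves on average, and the width of the speed distribution scales
# the same way, which is the broad-helium, narrow-xenon picture above
```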
UCI Chem 131B Molecular Structure & Statistical Mechanics (Winter 2013)
Lec 23. Molecular Structure & Statistical Mechanics -- Partition Functions -- Part 1
Instructor: Rachel Martin, Ph.D.
Description: Principles of quantum mechanics with application to the elements of atomic structure and energy levels, diatomic molecular spectroscopy and structure determination, and chemical bonding in simple molecules.
Index of Topics:
0:02:51 Rotational Spectrum of HCl
0:03:53 Molecular Partition Function
0:22:25 2-Level System Partition Function
0:31:12 Particle in a Box Partition Function
0:38:12 Harmonic Oscillator Partition Function
10.5446/18905 (DOI)
Right, let's pick up where we left off. We had just gone through Linus Pauling's brilliant exposition on hybrid orbitals, which are still used to this very day to rationalize a lot of things to do with structure and reactivity. But let's take a closer look; let's compare the hybrid orbital approach, which really says let's make pairs of electrons make the bonds, with the molecular orbital approach, which says, look, the way you have to do it is the way you did it with an atom: you have to put all the positive charges where they're supposed to be, and then you have to solve it. Then you get the solution, and then you can put your electrons into the orbitals, and they go into the lowest energy ones. This is a more delocalized approach to bonding, because if you're including all the nuclei at once, then there's no reason why you should be drawing lines. That makes it a little harder to understand, though, when you're drawing structures and you're pushing the lines around; chemists love those lines, because they allow them to write and rationalize reactions. But occasionally a reaction goes in a way that doesn't seem to behave, and that could be an indication that maybe the theory of the lines is falling apart a little bit. Let's talk then, in that vein, about more delocalized bonding. In the MO picture, we don't make hybrid orbitals; the word orbital is still used, but it means something completely different, and we must not confuse the two. In the molecular orbital picture, what we said we had to do was combine together atomic orbitals that had the same symmetry, similar energy, and overlapped in space. For a tetrahedral molecule with a central atom, like methane, we've got four carbon valence atomic orbitals, the s and the three p's. We have to know the structure. Why? Well, otherwise we'd have to move the things around and calculate the energy as a function of R1 and R2 and R3. We already know the structure: it's tetrahedral. What we want to understand is not the structure so much; we want to understand the bonding. How is it working? What's going on? We assume we know the structure. It's the same structure as Pauling's, but the description of the bonding is going to be different, and that's the thing that we're going to focus on. In the MO approach, you have to know the structure first. If you don't, you have to search through all possible structures, and if you're doing that, that is a very, very long-winded and intensive calculation, although sometimes people do it just to see what might be the most stable structure if there's no experiment or it hasn't been made. Let's put our hydrogen atoms at the corners of the same cube that we had before, HA, HB, HC, and HD, and let's see what we're going to do with these four. The carbon is at the center, and with our four hydrogen orbitals and our four carbon orbitals, we get eight molecular orbitals, because recall that in the linear combination of atomic orbitals rubric, we end up with the same number of molecular orbitals as the number of atomic orbitals we started with. Okay, now the question is, which orbitals can we combine? The answer is rule two: they have to have the same symmetry for this molecule. What that means is that we cannot do what Pauling did, because we cannot combine the carbon 2s and the carbon 2pz, or the other two p's, because those have different symmetry. Therefore, if we want to consider the 2s on carbon, we cannot combine it with any of the other atomic orbitals on carbon.
We have to instead combine it with combinations of the hydrogen orbitals at the four positions of the cube. And that's completely different, then, because now we won't necessarily end up with four identical lines in terms of four localized bonds. We will end up with a tetrahedral structure, that's for sure, because we started with that, but our description of the bonding will be different. Now the game is this. Each of the four carbon orbitals has different symmetry, so we have to combine the four hydrogens so that they have the same symmetry as the carbon orbital. And that's pretty easy to do, because if the carbon orbital is a big fat round s, then these guys have to be as round as possible, so they should just all be red, and that should be one of them. And if the carbon orbital changes sign, like it's plus here and minus here, well, we've got two hydrogens on the top of the cube, they should be plus, and two hydrogens on the bottom of the cube, they should be minus, and so forth. So there's a combination of hydrogens that goes perfectly with each of the four combinations on the carbon, and that's how we're going to do it. So for the 2s we use 1sA plus 1sB plus 1sC plus 1sD, all of them as round as can be. And this, then, if they're all red and the 2s is red, makes a gigantic orbital with no nodes that goes around all the nuclei and has roughly spherical symmetry, and it's called A1, which is a symmetry label that lets the spectroscopists know what the symmetry of the molecular orbital is. That name A1 has to do with its symmetry under the tetrahedral point group Td that you'll learn about later in the course. For now we're just going to treat A1 as a label that tells us which orbital we're talking about. Remember that fewer nodes means lower energy, and we've got this big red thing, this big fluffy thing that sort of looks like a teddy bear, with no nodes; therefore A1 is the lowest energy. That's what we predict. Above it, then, are three degenerate orbitals, degenerate because they're related by symmetry, and there's one for each of the p orbitals on the carbon atom. And those three are called T2, T meaning a set of three, a name that has to do again with their symmetry under the tetrahedral point group. The one and two have meanings, but I don't want to go into the exact meaning now. These are the four bonding orbitals, and they are not all the same, but there's four of them, and they still predict an exactly tetrahedral structure, not surprisingly, because we started out with an exactly tetrahedral structure. The other four combinations have more nodes. I could take the s as red, and I have to have something with the same symmetry, but I could pick the hydrogens all blue. Now that's a disaster, because now there are nodes in between the nuclei, so that one's no good. And likewise, I could take the pz with red on the top and blue on the bottom, and I could perversely pick these two guys to be blue rather than red, and then there would be destructive interference, and again that would be a very high energy solution. Therefore we can easily see that there are four bonding orbitals, so there's going to be four bonds, good, and there are four anti-bonding orbitals, and since there are only eight electrons, the four bonding orbitals are filled and the four anti-bonding orbitals are empty. And that's almost always the way it works out when you do things correctly, because, not surprisingly, nature finds its way into making the more stable configurations.
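To make those combinations concrete, here they are written out, with the hydrogens labeled A through D on alternating corners of the cube as above. The exact sign patterns depend on which corners you call A through D, and the factor of 1/2 is the normalization if you neglect overlap, so treat this as a sketch rather than a quotation from the slides:

$$
\begin{aligned}
a_1\ (\text{pairs with } 2s):&\quad \tfrac{1}{2}\left(1s_A + 1s_B + 1s_C + 1s_D\right)\\
t_2\ (\text{pairs with } 2p_x):&\quad \tfrac{1}{2}\left(1s_A + 1s_B - 1s_C - 1s_D\right)\\
t_2\ (\text{pairs with } 2p_y):&\quad \tfrac{1}{2}\left(1s_A - 1s_B + 1s_C - 1s_D\right)\\
t_2\ (\text{pairs with } 2p_z):&\quad \tfrac{1}{2}\left(1s_A - 1s_B - 1s_C + 1s_D\right)
\end{aligned}
$$

Each of these has the same symmetry as the carbon orbital it pairs with, which is exactly what rule two demands.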
Here then is an MO diagram. It's a little bit harder to draw the tennis tournament when you don't have just two players, but I have the four hydrogen 1s orbitals on one side, and then I have the carbon orbitals, with the 2s of the carbon lower than the 1s of the hydrogen, that's good, and the 2p higher. Well, I'd have to know that, but I can certainly figure it out by looking up the ionization energies of the atoms, which are well known. And so I can order them like that, and I have one very good combination, the teddy bear, which is down at the bottom, the A1, and then I have the three that are identical, the T2, and they're at slightly higher energy. The key is that there's no way that the A1 and the T2 have the same energy. They have to have different energies, and that is an experimentally testable fact that we can look at. So we fill up the bottom with eight electrons, four from the hydrogens, four from the carbon, the same way we always do with molecular orbital diagrams: once we calculate the diagram, we fill it up from the bottom. And here is an actual calculation of these orbital contours, from the excellent website of Dr. Stefan Imel. Here is the combination with A1 symmetry built on the carbon 1s. Remember that when we did lithium, when we drew those contours, there was hardly any overlap. What you're seeing here with the carbon 1s is exactly the same thing: the green contour of the carbon 1s is nowhere near the white protons that are sticking out like tinker toy spokes, and so the 1s is doing no bonding at all with the hydrogens. On the other hand, when we take the 2s, we get this big spherical look. It can't be exactly spherical because of the underlying tetrahedral shape, but we get this big green contour. And what's contoured here, the surface, is drawn so that there's a 90% probability that an electron in that orbital is inside that region. And you can see that it goes around all the nuclei, the carbon and all four hydrogens. So these two electrons in this orbital have this whole big space to go around, and it's a very good overlap with the 1s atomic orbitals of hydrogen. Then the next one up is the 2p, and I've shown here a horizontal one. And for the 2p, one side is, I apologize, I didn't pick the same colors that Dr. Imel picked, but let's say green is positive and blue is negative, rather than red and blue. Same difference. Here there is a node, but the important thing is that the node is right at the carbon atom. So although there's a node, it is not in between the two atomic nuclei. What was bad with H2, with the anti-bonding, is that the node was between them, so that there was no possibility of the electron being in there to glue the positive charges together. Here we've got one side building up density to hold these two hydrogens in, and the other side building up density to hold these two hydrogens in, and the node is right at the carbon, where it doesn't do any harm, because it's not between the bonds. If we look at the other two, they're exactly the same as this one, but just flipped. They're all degenerate; they have exactly the same energy because of symmetry. So there's another one that's in and out, again with a node right at the carbon atom, and then the final one, the third one, is just up and down. It's nice that he's drawn them with these different perspectives, because as you draw them slightly differently and look at them, you get a much better idea of what these surfaces actually look like than if you just have one perspective.
These are the three, then, that we said had T2 symmetry, and these are the three other bonding orbitals. They have a node, but it's not the bad kind of node that would make an anti-bonding orbital. The question is, then: on one hand we have the approach where we first monkey with the carbon orbitals and we keep the hydrogens, and then we make a bond to each hydrogen in turn. That's nice, because that's how we might draw it on a piece of paper if we were in organic chemistry, for example, or just drawing a regular Lewis structure. On the other, we have the MO approach, and we can calculate the energy of these orbitals; I just drew them qualitatively, but we can calculate them under certain approximations, and we can make the approximation better and better if we want to do more work. We don't need to do any more work to say: look, in the MO picture there are two different kinds of orbitals. There's one that's T2 and there's a lower one that's A1, and therefore what we do is we go to somebody who does photoelectron spectroscopy and we say, can you leak in some natural gas and ionize it and tell us what the binding energies of the electrons are in this region and how many orbitals there are. And if we do this experiment, there is a difference, because if there are four equivalent hybrid orbitals, each with two electrons, then they're all the same: we get one band in the photoelectron spectrum. Pick an electron out of any of those and boom, and you never ionize again; it's so unlikely that you ionize, period, that you don't ionize twice, so we don't have to worry about that. And then the carbon core electrons, which are the only ones left, are at much, much higher energy, so we don't have to worry about them either, because we can pick the photon so that they aren't going to come up; we are not going to put in x-rays, for example. And therefore, if there's one band in the photoelectron spectrum, Pauling is correct and MO theory is washed up, and if there are two bands in the photoelectron spectrum, then MO theory, even though it's a little more complicated and we don't get to stick in the lines, is more correct. And let's then have a look. This experiment, just like the paramagnetism of O2, is definitive, and of course it comes down in favor of MO theory; otherwise we wouldn't have gone to all this trouble. Here on slide 666 is the photoelectron spectrum of methane, and there are two bands, and they are assigned to T2 and A1, and they even have about the right ratio of size. You can see that one looks like it's maybe three times bigger than the other. There are a lot of factors that go into the cross sections of photoejection, so you can't just say that if there are three orbitals you get three times the area, or anything as simple as that, but nevertheless it does seem to make some sense. And if we look at the vibrational progression: methane is a much more complex molecule and can really vibrate in a lot of different ways, which is of course one reason why it's a very bad greenhouse gas, because you put it up in the atmosphere and it can absorb infrared in a lot of different ways and then radiate it back down to earth. So we can't resolve all the little things like we could in N2; they just appear as kind of a ragged, porcupine-like envelope. But nevertheless we can say it's the T2, really, that's doing the lion's share of the bonding in methane, and the A1, the big teddy bear, is rather less important, because there's less vibrational progression when we eject an electron from it.
This then says: look, we have to prefer MO theory to the theory of localized bonds, even clever localized bonds with hybrid orbitals, because now we have an experiment that's clearly indicating that one approach is better, and it disagrees with the other one, and that's what science is all about. It's easy to talk, but you can talk all you want: if somebody does an experiment and shows the ocean is rising, then it's rising, and it doesn't matter whether you want to say it's not. If it's measured, you can quibble with the measurement, but you first have to understand how the measurement works, and in fact the measurements along those lines are extremely reliable, even though the system itself is quite complicated. Alright, let's talk about some bigger molecules. We don't want to just leave off with methane, although to a physical chemist methane is a big molecule in a lot of ways, and I think you can see why: they may want to know a lot of things about it that an organic chemist may not be interested in, and an engineer just wants to burn it and get some power. So we can look at some conjugated hydrocarbons and so-called aromatic systems. I've never smelled an aromatic molecule that was aromatic to me like a rose, but they're called aromatic, and they certainly do have smells, though they usually smell like gas or mothballs or something like that. So let's look at benzene, naphthalene and a few others, like butadiene, so-called conjugated hydrocarbons, and this will be another interesting way to take what could be an extremely complex calculation (if we did it from scratch it would just be terrible) and make it simple, while leaving enough meat on the bone that we can still come to some interesting conclusions about the stability of the system. First let's just consider ethylene, C2H4. Ethylene is an extremely important molecule in the fine chemical industry. It's the feedstock for a lot of things, and a lot of people are spending time and energy figuring out how to make ethylene efficiently when we run out of the ways we have been making it. Well, we've got the 2s orbitals on the two carbons and the 2p orbitals, and we've got the four 1s orbitals on the hydrogens, so there are going to be 12 molecular orbitals, because there are 12 atomic orbitals. Twelve is a lot, but we can still figure out what they should look like. Just like methane, even though this is bigger, we're going to have to start by putting the nuclei at their preferred positions, with 120 degree bond angles between the hydrogens and the carbon, and then we're going to have to solve for what the molecular orbital energies should be by putting in electrons that are allowed to go over the entire nuclear framework; only if they don't want to go over here, for their own reasons, are they not going to. It's not that we're capturing them between two nuclei. Here then on 668 is the molecular orbital diagram for ethylene. We've got our four hydrogens again, but now we've got two carbons, and we've got all these states labeled from the bottom with symmetry labels, 2ag, 2b1u, and so forth and so on.
We don't have to understand what all of them mean, but many of them are overlaps between the s orbitals on carbon and the 1s orbitals on the protons, and then there are two right in the middle, marked pi g and pi u, that come just from the carbon. That's because those two are orthogonal to the plane in which the protons lie, so they have a different symmetry than anything the s orbitals can make, and therefore they only come from the carbon, and they can either be this way on the two carbons or that way. Of course only one of them is filled, because there's one double bond in ethylene: there's a sigma bond and then there's a pi bond, and so one of those is filled right up to where it's stable; they're slightly below the 2p level. We slot in all our electrons all the way up, and what we find is that it's filled right up to that level that involves just the p orbitals of carbon. And here is the photoelectron spectrum of ethylene. We can also get that, and now ethylene is quite complicated, so these peaks, which all have vibrational progressions, are very, very hard to resolve, and they may not be resolvable at all; it depends how careful and how clever the experimenter is. But at least they aren't so wide that they overlap with each other and we can't see them at all, and they've all been assigned, as you can see, b3u, ag and so forth, and they all correspond to ejecting electrons from lower and lower in the molecular orbital diagram, all the way down to 2ag. After that, maybe you need too much energy and they didn't have such energetic photons. The symmetry labels: again, treat them just as labels for now. Don't dwell on them too much; once you study group theory you'll know exactly what they mean. For us, though, it's the highest occupied molecular orbital, the so-called HOMO, and the lowest unoccupied molecular orbital, the LUMO, that are going to be the most important. Why? Because if I'm going to be making a bond, then it's the electrons right at the top of the cake that are going to fly off and go somewhere and make the bond. The ones that are held further down in energy are not likely to be the ones going first. And likewise, if some other atom is coming up, brimming with electrons, and says, hey, take some electrons from me, where are they going to go? They're going to go into the lowest unoccupied molecular orbital, because they're going to come in and go down to the most stable state. And therefore the highest occupied molecular orbital, what it is, and the lowest unoccupied molecular orbital, what it looks like, are very, very, very important to understanding chemical reactions, and therefore we can focus mostly on them and forget about most of the other ones, and that's a key simplification. The highest occupied one I've labeled as 1b3u and the lowest unoccupied is 1b2g, and they both involve the carbon p orbitals. So these two, the highest occupied and the lowest unoccupied, involve only pi electrons; it's only the pi electrons that are involved, and that is a key simplification when you consider the reactivity and structure of things with double bonds like ethylene.
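Since that HOMO/LUMO bookkeeping comes up constantly, here is a trivial sketch of it in Python (my own illustration; the orbital energies below are made-up numbers, not ethylene's real ones):

```python
def homo_lumo(energies, n_electrons):
    """Return (HOMO energy, LUMO energy) given orbital energies sorted in
    ascending order, filling two electrons per orbital from the bottom up."""
    homo_index = (n_electrons + 1) // 2 - 1   # zero-based index of the HOMO
    return energies[homo_index], energies[homo_index + 1]

# Six toy levels and eight electrons: the fourth orbital up is the HOMO.
toy_levels = sorted([-15.0, -12.1, -10.5, -8.9, 1.2, 3.4])
print(homo_lumo(toy_levels, 8))   # (-8.9, 1.2)
```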
We can treat all the sigma orbitals just like core electrons, and since they aren't important (they aren't going to be participating), we can rationalize whatever structure they have any way we want. Usually the prescription is: those aren't so important, we just say they're sp2 hybrids. It doesn't matter what we call them, because they're never going to do anything; they're just going to be filled, and we might as well take a simple approach like Pauling's and say, well, they're making 120 degrees, and there's a sigma framework that's holding the atoms in place. And then there are these delicate pi orbitals, and for those we reserve the molecular orbital treatment, because those are going to do something, and so we want to treat them more accurately. And because there are only two orbitals, this one and that one, it's basically down to the same thing as H2. The fact that they're p rather than s doesn't matter; the math is exactly the same, and we've already done it, so we can leverage that now and just say we've got this perpendicular pi system with two p orbitals, and they can either be in phase or out of phase, and we can calculate what we want, so it's just like H2 in this case. Our pi wave function, then: we suppose the molecular orbital for the pi system is C1 times 2pz(1) plus C2 times 2pz(2), where 1 and 2 refer to the two carbons, and C1 and C2 are the two coefficients yet to be determined, though we know they're going to come out equal because the carbons are the same by symmetry. We have to make a secular determinant, because this is molecular orbital theory and we've got to calculate the orbital energies. Remember we have the Coulomb integral and the exchange integral, and so here it is: H11 minus E S11, with the Coulomb and overlap integrals, then H12 minus E S12, with the exchange and overlap integrals, and so forth. The determinant, this times this minus this times this, is set equal to zero, and since the two carbon atoms are equivalent, H11, that integral, and H22, that integral, are exactly the same, so we can make those the same, and H12 is already the same as H21. If we wanted to make quantitative progress here and proceed the way we have in a lot of other cases, helium and hydride, we'd have to get out our integrals, our thetas and sine thetas and all that, and go to work. But we don't want to do that; none of it is too easy to do, and we're only after a qualitative description of what's going on. We don't expect it to be quantitative, and it would be very hard to make it quantitative anyway: even if we calculated these pi orbitals very well, by just leaving out everything else and saying forget about the other stuff, we could be making some big mistakes, some big errors there. So let's simplify things and not do any integrals at all, and that will be a very, very good method; it's called the Hückel approximation. An integral, whatever it is, is just a number by the time you've done it: a number with units, but a number. So we're going to replace the integrals with symbols for those numbers, alpha and beta, and then we're going to calculate things very quickly. The Hückel approximation makes three assumptions, and at first they seem laughable, but in fact they're very nicely, physically motivated. The first concerns the overlap integral; remember how much trouble S was to calculate, and how we had to do all those things. Well, here what we're going to say is this: the overlap integral is zero unless you're talking about the overlap of an orbital with itself.
So the overlap even between two neighbors is set to zero, and that seems very counterintuitive. Why? Because you're claiming they're making a bond. But when you look at what came out of our analysis before, S was just a player in the denominator that changed things slightly; it never dictated whether a bond formed or not. So it's okay, it doesn't really matter if we set it to zero, and if we set it to zero we're getting rid of a lot of math, because all those terms multiplied by S12 go away. So those are gone; and when it's the overlap of an orbital with itself, we set it to one, because we're saying it's normalized. Second, all the Coulomb integrals, the Hii integrals, are the same for equivalent carbons, and sometimes we assume they're the same even if the carbons aren't quite equivalent, because we're just too lazy to figure out how different they would be, or it's difficult to figure out what they would be. And third, the exchange integrals, also called resonance integrals in the business, vanish except for nearest neighbors. The rationale for that is: if I'm here and you've got a p orbital right next door, then that integral is going to have some value, but once I get out to the next atom over, with something intervening in between, it's too far away. I could calculate it, but it'd be small, and I don't want to waste all my time calculating something that is 0.1% of the answer but is extremely difficult to calculate. That's a poor use of my time. So those three approximations let us simplify our secular determinant and get rid of all the calculations. Conventionally you write alpha for the Coulomb integral and beta for the exchange integral, and our secular determinant just becomes: alpha minus E, beta; beta, alpha minus E; set equal to zero. That's a quadratic equation, and the solution for the energies is alpha plus or minus beta. Not surprisingly, it's very similar to what we had with H2: there's a good combination and a bad combination. Now, in fact there are two electrons going in, and we didn't calculate any electron-electron repulsion or anything like that, so all we're going to do is double the orbital energy when we say what the total actually is. The advantage of this, however, is that we can do something a little more intimidating, much more intimidating than helium, for example. Let's try benzene. Benzene is C6H6. We would have a 6 by 6 determinant, and it would have entries everywhere. Just expanding that out as this element times the determinant of a 5 by 5 would take forever, and that's going to be a major problem to calculate. So let's figure out how to get our six pz orbitals in the pi system to yield six MOs without doing that. Let's set up the determinant, then, which I've done here on slide 676, in the Hückel approximation. The nice thing is that everything vanishes except for terms right at or next to the diagonal: the nearest-neighbor entries each have beta, because they all have the same value, and the six carbons all have the same Coulomb integrals, H11, H22 and so forth, so the diagonal entries are all alpha minus E. The overlap is gone, so there's no E off the diagonal; that's very convenient. Everything else is zero everywhere except for the top right and bottom left corners, and that's because as I go around the ring, carbon 6 is next to carbon 1, so you always end up with an entry up there. But all the rest are zero, so most of the thing goes away. And we can make it even simpler: since we know beta's not zero, we can divide everything through by beta, so that the off-diagonal entries become 1.
And then we can just redefine alpha minus E divided by beta to be some variable: let's let x equal alpha minus E over beta. Then, with all the other entries being 1, we end up with the following polynomial to find the roots of. If you expand out the secular determinant, you end up with a sixth order polynomial: x to the 6th minus 6 x to the 4th plus 9 x squared minus 4 equals zero. So that's why you took a course in algebra in high school, because now this should be a duck soup problem. In fact it may not be so easy if you just stare at it, but there's a trick, and I'm going to let you explore the trick on the homework. Because there are only even powers, we can redefine y equals x squared, and then we've only got a cubic. If we're lucky, we might be able to factorize the cubic and figure out how to actually get the answer. Or, if we're lazy, we can plot it in Mathematica and see where it crosses zero, and if it crosses zero at some convenient place that is easy to read off, because it's an integer, then we can try factorizing out x minus that integer and see if it factorizes. If you do that, and you're persistent, you find that you have repeated roots because of the y equals x squared substitution, and then some are repeated again just because of the structure of what you end up with in the cubic, and you get x equals plus or minus one, plus or minus one again, and plus or minus two. Those are the six roots of that equation. Well, x was alpha minus E over beta, and therefore these are the corresponding energies: E1, the best one, is alpha plus two beta (recall that these integrals are usually negative); E2 equals E3, which is alpha plus beta; E4 equals E5, that's alpha minus beta; and then E6 is alpha minus two beta. So there's a nice symmetry to how the energies come out. Now, how do we get what the molecular orbitals are? Just as we had to do before: we go back and actually put the energy into the original equations, and then we find the orbital that corresponds to that energy. The degenerate ones, with E2 and E3 the same and E4 and E5 the same, require a little bit of thought to sort out. But whatever orbitals we get, recall that they have to be orthonormal; the molecular orbitals have to be orthonormal as well. That's an important constraint that lets you simplify things. But I'll let you do that, and here I've given you the answer. The first one, not surprisingly the lowest energy molecular orbital for benzene (recall benzene is a planar hexagon), is when all six p orbitals have the same phase. So I get a ring of positive electron density around the top, holding all the carbons together, and I get a ring around the bottom of the opposite sign, but when I square it I find I build up density in between the carbons, even though it's off axis, and that holds them all together. And then the next two have some nodes, but the nodes aren't bad ones: they sit at certain carbons, and the orbitals, just like in methane, hold the other parts together. So there's one that has a node this way, holding these parts together, and there's one the other way, holding those parts together.
And then there are the bad three, which have the opposite combinations, basically, and the highest one is when this one's up and this one's down and this one's up and this one's down, and so forth. That has nodes in between everything, tons of nodes, and you can see it's a very crinkled wave function that's going to have very high energy, and that's the least favorable one. That's a strongly anti-bonding orbital. And in fact, by drawing simple figures just based on the cyclic structure, you can arrive at some simple rules to predict aromatic stability. You draw the structure point down, so for a hexagon you draw it point down, and then at each vertex you put a line, which is an MO. You have one at the bottom, then you have two, that's E2 and E3, then you have these two, that's E4 and E5, and then you have one at the top, that's E6. And you've only got six electrons, because you only had one electron in each p orbital to start with, so you fill up the bottom orbital with two, this one with two, this one with two, game over. You pick the three good ones, the three winners; you've left all the anti-bonding ones unoccupied, all the electrons are paired, and all these orbitals are delocalized, so we don't predict any difference between the bonds. In the Lewis structure of benzene, of course, we draw a single bond and a double bond, because we have to, because of the Lewis structure, and then we say, well, there's resonance, and we move this here and move them around like a mousetrap. In the molecular orbital picture we don't have to do that; the orbitals are already set up to be delocalized. On the other hand, if we pick cyclobutadiene and we stand a square on its point, then we have a lowest one, E1, then we have E2 and E3, and then we have E4. But we have four carbons in cyclobutadiene, and therefore we've only got four electrons, so we put two in the bottom, but then we've got two equal ones here. Well, that's like O2 again, so one goes here and one goes there. And that's not aromatic stability; that's predicting a diradical. I wouldn't take this kind of very qualitative thing too seriously (I'd do some spectroscopy to figure out what you actually get), but whenever it comes out like that, where two electrons aren't paired, the conclusion is that it's not aromatic. What happens, then, is that the systems with 4n plus 2 pi electrons come out stable, and the ones with 4n, like 4 rather than 6, are not especially stable, and not stable at all in some cases. So only the 4n plus 2 pi systems are predicted to have aromatic stability, and that comes out of a more detailed calculation too, not just from playing games with shapes. But it's interesting to play games with shapes, because it's very quick and often it's good enough, as the sketch below shows. Alright, let's do a practice problem. Let's consider the following molecules and see which ones an organic chemist would predict to be aromatic, with that special stability. Benzene was used as a solvent for a long time. Why? Because it doesn't react with anything, so it's a perfect solvent: if things won't dissolve in water, or a reaction is slow in water, you dissolve it in benzene and the reaction goes very quickly. The problem, of course, is that benzene is quite a potent carcinogen, like a lot of these systems, and if you're breathing a lot of it, boiling it up and breathing it on a daily basis, it's not the greatest, and that's why they went away from it.
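Here is that sketch: a minimal numpy rendering of the Hückel recipe from the last few slides. This is my own illustration rather than anything from the lecture; alpha is set to 0 and beta to -1 so that the eigenvalues read directly in units of beta.

```python
import numpy as np

def huckel_ring(n, alpha=0.0, beta=-1.0):
    """Hückel matrix for a cyclic polyene C_n: alpha on the diagonal, beta
    between nearest neighbors, with carbon n wrapping around to carbon 1."""
    H = alpha * np.eye(n)
    for i in range(n):
        H[i, (i + 1) % n] = H[(i + 1) % n, i] = beta
    return H

# Ethylene (two carbons, no ring): the quadratic gives E = alpha +/- beta.
print(np.linalg.eigvalsh(np.array([[0.0, -1.0], [-1.0, 0.0]])))  # [-1.  1.]

# Benzene: the six roots x = +/-1, +/-1, +/-2 of the sixth-order polynomial.
print(np.linalg.eigvalsh(huckel_ring(6)))  # [-2. -1. -1.  1.  1.  2.]
# Reading -2 as alpha + 2*beta: six pi electrons fill the three lowest MOs.

# Cyclobutadiene: two degenerate orbitals at E = alpha (the zeros), so the
# last two electrons go in unpaired, the predicted diradical; not aromatic.
print(np.linalg.eigvalsh(huckel_ring(4)))  # [-2.  0.  0.  2.]
```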
Let's consider them, then: benzene we know is aromatic; naphthalene, which is in mothballs, that's that smell; azulene, which you probably haven't seen nor smelled; and then cyclooctatetraene. Let's take these and see what they are. Well, the first thing is the names: if you don't know what structures the names refer to, the names are useless. And we have to know that they aren't going to be aromatic unless, in the Lewis structure, they would have alternating single and double bonds, so that we could do resonance and move things around so the bonds would all be equivalent, half the time single, half the time double. If they aren't even like that, then forget it, it's not going to be an aromatic system, or at least the part that can't do that trick with resonance is not going to be an aromatic system. And they have to have a cyclic structure as well, so that the snake bites its tail, because that's important in how those two corner terms in the determinant come in; if those don't come in, a lot of things change, it turns out. Then let's draw the structures and count. For naphthalene, we've got two benzene rings fused together. Now, if we were actually going to do the MO treatment, we'd have to be careful, because the two carbons without any hydrogens on them could have a different Coulomb integral than the others, and the others would all be the same, so we should take that into account: we should have alpha 1 and alpha 2. If we're lazy, we just say, well, they're about the same, and call them all alpha. And I've drawn the Lewis structure here, one of them, and we could play the game with resonance and draw one the other way. We count the electrons in the pi system (we forget about the sigma, we forget about the hydrogens) and we see there's 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. That's 4n plus 2 for n equals 2, and it's planar, and it is in fact aromatic. For azulene, which is a structural isomer of naphthalene, there's a 7-membered ring glued to a 5-membered ring, and I can still draw the double bonds the same way; it's kind of amazing that it works like that, and it's a beautiful blue color. Again I count up: it's a cyclic system, there are 10 pi electrons, the delocalized MOs go over the whole thing, it's planar, and it's a very nice example of an aromatic system that's not so trivial as benzene. And then what I've shown you here is a mushroom that is blue. It's absolutely amazing, but in fact this mushroom makes an azulene derivative, for some reason, probably extremely interesting chemistry. Usually when plants make interesting molecules, it's because they're keeping bugs off or keeping other things away, because, too bad for plants, they can't move, and if you're stuck and bugs are crawling all over you, you need chemical warfare to keep them at bay. Usually, then, if you see a brightly colored mushroom like this, you admire its beauty and its beautiful blue color, but let someone else figure out whether it's safe to eat. There are people who do that for a living, and it's very interesting to see how courageous they are: they usually eat a very tiny piece and then wait and see what happens, and then they eat a little bit more. I'll let them do that; don't eat the mushrooms that you find around that are this beautiful blue color, or the ones with the orange gills either, because that might be the last thing you do.
For cyclooctatetraene, I've drawn it like a stop sign with four double bonds, but it only has eight pi electrons, so that's not 4n plus 2, and in fact the system is not aromatic. It's not planar; it has a 3D structure and looks much more like alternating, localized single and double bonds. So that one, not surprisingly, since it's not aromatic, doesn't have a trivial name. Okay, we're going to leave it there, and for our very last lecture what I want to do is go through where we started, where we got to, and all the things we covered, because we've covered a lot of ground, from electrons and photons to atoms to molecules, and we've done it in a certain semi-systematic way that I hope has made certain things that you wondered about much clearer, and maybe caused you to want to learn more about some other things that you've never heard of. So we'll leave it there and then sum up in the next lecture.
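A one-line version of the electron counting used in these examples, as a quick sketch of my own, with the caveat the lecture itself makes: the system must also be cyclic, planar and fully conjugated for the count to mean anything.

```python
def is_huckel_aromatic(n_pi_electrons):
    """The 4n+2 rule: counts of 2, 6, 10, 14, ... qualify."""
    return n_pi_electrons >= 2 and (n_pi_electrons - 2) % 4 == 0

for name, n_pi in [("benzene", 6), ("naphthalene", 10), ("azulene", 10),
                   ("cyclooctatetraene", 8)]:
    print(name, is_huckel_aromatic(n_pi))
# benzene True, naphthalene True, azulene True, cyclooctatetraene False
```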
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:01:38 MO Picture for CH4 0:13:53 CH4 PES 0:18:52 Delocalization 0:21:45 Ethylene 0:34:02 Benzene 0:43:44 Aromaticity
10.5446/18904 (DOI)
Well, the quarter is coming to a close, and so is this series of lectures. Today what we're going to do is continue our exposition of H2 plus and move on to H2. We're going to optimize our molecular orbital treatment of H2 plus, and then we're going to kind of throw in the towel, because H2 would be at least as much work, if not more, than helium if we did it from scratch. And so at that point we're going to adopt a more qualitative view of molecular orbital theory, the kind of view that people in the business often use when they talk about bonding in much, much more complicated atoms and molecules that have many, many electrons and nuclei all over the place. We'll also touch on a very important idea called configuration interaction, and we'll see how that can influence the energies that we calculate and improve the results that we get. Well, where were we? At the end of the last lecture, what we tried to do was include a Slater 2s orbital to improve the energy for H2 plus, and what we found, without going through the whole calculation, is that when we allow any amount of the 2s, the calculation basically decides on its own that the 2s is not very important: it doesn't include very much of it, and the energy doesn't improve very much. And what that means, in fact, is that 2s was a very bad choice. I think you can kind of see why 2s might be a bad choice to include. It's a bigger sphere, and H2 plus is more like a sausage, and so if we want to get something to be more like the electron density that we expect in that molecule, we don't want to include another round thing; basically the calculation concludes the same thing. So whenever you include orbitals and then get very tiny coefficients, what that means is that you made a poor choice. If you were making them by intuition, you made a poor choice, and it's possible to make a bunch of poor choices even using a computer, and spend a lot of computer time, and then come to the conclusion that none of those atomic orbitals or Slater orbitals contributes significantly to the final result. And so the question is, knowing that, how can we make a better choice? In fact, this is quite important, because even if you're using the computer to help you with a program like Spartan or Gaussian, you have to make an intelligent choice about the basis set, the orbitals you're going to include in the calculation, and if you include a lot of them that are irrelevant, that's like going down every dead-end street and staying to the right-hand side and going through a whole neighborhood, rather than taking the boulevard straight through to get to the answer. Well, here's the idea. Supposing I'm a hydrogen atom and I have my spherical 1s orbital here, and I bring up a proton. The electron is going to be attracted to the proton, and so I expect that this orbital is going to stretch in that direction. In other words, it's going to be polarized by the presence of the other positive charge, and that stretching in one direction along the internuclear axis can't be represented very well by a sphere, because a sphere moves out in every direction equally. Therefore I don't want to use a spherical orbital; I want to use an orbital that's shaped more like this, with some directionality. And because of that, what I want to use is in fact a 2pz orbital. That's what I have to use; there's no 1pz, of course.
So I have to use a 2pz orbital, and that would have a plus and a minus lobe, and I could direct it so that the electron density builds up in the direction of the other nucleus. Of course, if I include 2pz, which has cosine theta in its functional form, then my integrals get a lot harder to do, because now I have this extra theta and so forth. But if I include this 2pz in the original basis set, I can improve the results a lot. The integrals get harder, but they aren't impossible. We could certainly do them all, and if we had world enough and time we could do that sort of thing, but I'm afraid we won't be able to go through it in great detail and take two lectures just to do all those integrals one after another, although it is fairly relaxing to occasionally decide to do a problem from scratch and go through it and understand every single thing about it. That's how you become an expert, of course. So here's what we're going to do. We're going to try a trial function, here on slide 597, of the form: some coefficient C1 times 1sA plus 1sB, plus some other coefficient C2 times 2pzA plus 2pzB. We have to keep things equal between A and B because of the symmetry of the problem, but including these 2p orbitals lets us include polarization in the wave function. We know from experience how important this parameter zeta, the funny Greek squiggle, is in the exponent, and so we're going to use Slater orbital variables straight away. We aren't going to just stick to the hydrogen-like wave functions, because we've seen that every time we allow zeta to vary, so that the things can stretch out a little bit, we get much, much better results. So if we're trying to get better results, we might as well put zeta in. And then here are the functional forms for the 1s and the 2pz that we're going to include. There's zeta 1 for the 1s and zeta 2 for the 2p, and of course there's the r cosine theta that we get from a 2p orbital. When we first start out, though, it's simpler to let zeta 1 equal zeta 2 (that's more restrictive), then just optimize the coefficients C1 and C2, and then go back and optimize zeta. When you're optimizing the linear coefficients, the amounts of the fixed things you're using, that's a pretty simple calculation to do; it amounts to linear algebra. When you start optimizing things in the exponent, it's much, much trickier, and usually it involves a lot more work. Here's what we get, then, after quite a long calculation. The minimum energy is minus 0.59907 hartree, and the optimum internuclear distance, R sub e, is 2.00 times a naught, the Bohr radius. And this is with C1 equal to 1 and C2 equal to 0.161. So about 16.1% of the pz included, when the two zetas are the same, is the optimum. That's quite a bit, and what it means is that we made a good choice, and we can see we made a good choice because the energy improved and the radius came much closer to the true experimental radius. And of course the joint value of zeta is not 1; it turns out to be 1.247, so stretched out a little bit, like it was before. There's an overall normalization in this to make sure that your whole molecular orbital is normalized, but the relative amounts are 1 and 0.161, so 100% and 16%, so to speak. And as I said, we get quite a bit of improvement by doing this. Now, if we let both zetas vary, then our integrals are tougher, and we have these two parameters, and we end up with a very long formula at the end.
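Written out, the trial function being optimized here looks like this; the proportionality forms are as quoted in the lecture, while the exact Slater normalization prefactors, noted in the comment, are my addition:

```latex
% Polarized LCAO trial function for H2+ (slide 597)
\psi \;=\; C_1\,\bigl(1s_A + 1s_B\bigr) \;+\; C_2\,\bigl(2p_{zA} + 2p_{zB}\bigr),
\qquad
1s \propto e^{-\zeta_1 r},
\qquad
2p_z \propto r\cos\theta\, e^{-\zeta_2 r}
% Standard Slater normalizations: (\zeta_1^3/\pi)^{1/2} for the 1s and
% (\zeta_2^5/\pi)^{1/2} for the 2p_z.
```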
And then, after all that, we have to optimize this long formula and find the minimum values of zeta 1 and zeta 2. That's an even longer calculation, but we can do it, and it's not too bad; it can even be done by hand. It's not a big deal, it just takes time and patience. And if you do that, then you find that the minimum energy is now minus 0.60036, the radius is the same, C1 is 1, and C2 is now 0.138, so 13.8%, changed a little bit. Zeta 1 is 1.2458, which is close to the value we had before, and zeta 2, for the p orbital, is 1.4224. The exact energy is minus 0.6020, and so we are very close with minus 0.600. Of course, with one electron only, the orbital approximation is not really an approximation at all, because there's only one electron. So of course we would expect to be able to do very well once we get the form of the orbital down. We could include more terms. You could ask, which ones would we include? Would we include a 2s pointing the other direction? The answer is no; that would do absolutely no good at all. Should we include a 3d that looks like a cloverleaf with four lobes? The answer there is also no; that's not going to help us at all, and if we put a function like that in, its coefficient comes out to be essentially zero. We could put in a 3d z squared, which is the unique one that looks a little bit like a p orbital with that ring around the center, and if we include a couple of those as well (and by that time we really do need a computer to do the calculation; doing it by hand is just too much work), then we can get very, very, very close to the exact answer for H2 plus. And that's really reassuring, because in these simple systems we had better be able to get it right, and get it right to a lot of digits. Otherwise there might be something wrong with our whole underlying view in terms of wave functions and quantum mechanics. No one believes that there is anything wrong with quantum mechanics; it seems to predict everything very well, even though it makes surprising predictions about things like the double-slit experiment. The H2 molecule now: if I add another electron, that's the simplest neutral molecule we can have. And it's just like helium now: with the two electrons we have the electron-electron repulsion. We're back to that 1 over r12 there to integrate, only now we have to integrate it over a much uglier set of coordinates, with the nuclei stretched out rather than being at one point. And so it's harder, and I'm not going to go through that, because it's too hard, and I think you've got the idea by now about how you undertake these calculations, even in a lot of detail. What we're going to do, then, is pretend that we can use orbitals that look rather like the ones we had for H2 plus, and that we can just slot another electron in. Recall that's what you do with atoms: you start with the hydrogen atom, where you know the exact answers, and then you just start slotting electrons in. And of course what happens is that the orbital energies get jumbled around and move around, but in the orbital approximation you still think of a 3d orbital as rather like something you would find in hydrogen. Of course they may be pulled in if the atom is bigger and the nuclear charge is bigger, but qualitatively we still think in terms of these kinds of shapes, and we still think in terms of shells, like the radial distribution function that we found in a much earlier lecture.
We know we can put two electrons into the same spatial orbital, the same bonding orbital that we found was the low energy solution. And the overall molecular wave function is then a Slater determinant, just like it was for atoms, because we have to arrange the spins so that the total wave function is anti-symmetric. What we can write, then, is that the bonding orbital is a linear combination of two exponentials, let's say, if we just use the 1sA and 1sB orbitals. We normalize it with the overlap S in the denominator here, and we get this formula for the bonding orbital. And then the Slater determinant consists of this bonding orbital and the assortment of spin states. So we start with the bonding orbital with electron 1 spin up, alpha, and then the bonding orbital with electron 1 spin down, beta. In the second row we're referring to electron 2: we have again the bonding orbital for the spatial part, with electron 2 spin up, and the final entry in the 2 by 2 Slater determinant is the bonding orbital times the spin function for electron 2 being down. And I've expanded this out on slide 601, and this is what we have now: we have the bonding orbital for electron 1 for the spatial part, times (because remember, that's how we do it, we always take a product) the bonding orbital for electron 2, and then we have the spin part, which is the anti-symmetric singlet part that comes from taking the determinant. In this case the spatial part is symmetric, obviously, and the spin part is anti-symmetric, which you can verify by swapping 1 and 2 and seeing that the wave function changes sign. As far as we're concerned, we are not doing anything with magnetic energy; we aren't turning on a magnetic field. In that case the Hamiltonian, in other words the operator for the energy of H2, does not have in it any spin-dependent terms. It doesn't have any energy term which would indicate that spin up is different from spin down. If we turn on a magnetic field, then there are energy terms that depend on whether the electron is spin up or down, and in that case we have to be careful, because we have to include those terms in the calculation. But for us, we don't have to do that. And so the spatial part, which is what we're going to focus on mostly, is just a product of the molecular orbitals: 1sA(1) plus 1sB(1), or we could make it more elaborate (we can always expand each starting molecular orbital), times the same thing for the second electron. And I've called this whole thing psi MO, the spatial part: 1 over 2 times (1 plus S), times this product of orbitals. If we do this, our molecular orbital picture predicts a bond that's not too far in error, although it's by no means perfect: the dissociation energy is not perfect, the bond length is not perfect, but we do predict a bond for H2, and so it seems like everything is okay. We can solve for the total energy as a function of R, with these two electrons in, just like we did for H2 plus: we treat R as a parameter, we move the nuclei, we fix them, we solve for the lowest energy, we put a point there, we move the nuclei again, we solve again, and so forth. It's just the same thing as we did for H2 plus. However, if we do this, something pretty odd happens.
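Before seeing what goes odd, here is that determinant written out in symbols, with sigma_b standing for the bonding orbital; this is just the construction described above:

```latex
% Ground-state wave function of H2 in the simple MO picture
\sigma_b = \frac{1s_A + 1s_B}{\sqrt{2(1+S)}},
\qquad
\Psi = \frac{1}{\sqrt{2}}
\begin{vmatrix}
\sigma_b(1)\alpha(1) & \sigma_b(1)\beta(1)\\[2pt]
\sigma_b(2)\alpha(2) & \sigma_b(2)\beta(2)
\end{vmatrix}
= \sigma_b(1)\,\sigma_b(2)\,
\frac{\alpha(1)\beta(2)-\beta(1)\alpha(2)}{\sqrt{2}}
% symmetric spatial part times the anti-symmetric singlet spin part
```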
As the atoms get farther and farther apart, the energy of H2 does not go to the energy of two isolated H atoms, which would be minus 1.0: minus a half for this hydrogen, hydrogen A, and minus a half for that hydrogen, hydrogen B. That's not what happens with this molecular orbital, our simplest one. And therefore something's wrong, because for atoms we really looked at predicting the correct ionization energy as part of the measure of quality, of whether we had the right idea or not. If we couldn't predict the ionization energy, then we thought something was wrong. And in fact something is wrong here, but what's wrong is a little bit more subtle. It seems like, for some reason (because, as I'll show you in a second, the energy is way off), we've done something inadvertently wrong, because there's something about it that we still don't quite understand. So when we look closer, we've still got a sort of structural flaw in our approach, and this is a very important structural flaw to appreciate. If we take zeta equal to 1, the minimum energy of H2 is minus 1.0991 hartree at the minimum radius of 1.603 a naught. That's if we do everything properly, do all the integrals, optimize it; that's what we get. If we let zeta vary, of course, it expands a bit like it always tends to, and we get zeta equal to 1.193; the minimum energy now improves to minus 1.1282, and the radius improves to 1.385 Bohr radii. The accepted values are minus 1.1744 for the energy and 1.401 for the internuclear separation, R sub e. The R sub e we predicted by optimizing zeta is smaller, but keep in mind the variational theorem refers only to the energy. It doesn't say that the internuclear distance always has to be greater than the correct one; it just says the energy is always higher than the correct one, and so there's nothing wrong with predicting a tighter arrangement of nuclei. That's nothing to worry about. However, if we take our same equations and we just let R go to infinity, boom, out like that, then the energy goes to a limiting value of minus 0.7119. That's what we get, and that's way off from minus 1. Something is wrong. We're predicting an elevated energy in the limit, and we have to take a look at that, because it's far too large an error to just brush off. In fact, even if we make our orbitals much more elaborate but keep the same structure, we still get a very bad answer for the bond dissociation energy, if you like, of H2: taking H2 and producing 2 H atoms. It must be, then, that we're doing something different from that, and to see what we're doing, here on slide 605 I've expanded out all the terms. We have the product of electron 1 in its MO and electron 2 in its MO, and here's what we get: we get 1sA(1) 1sA(2), then we get 1sA(1) 1sB(2), then we get 1sB(1) 1sA(2), and then we get 1sB(1) 1sB(2). What do these four terms mean? Well, let's just look at them. The first term, 1sA(1) 1sA(2): what does that mean? It means both electrons are on nucleus A, and none on B. This first term, which is 25% of the total, is two electrons over here and none over there. And then what we've got in the middle is one electron on A and one electron on B, and then one electron on B and one electron on A, just the other way around, because we're labeling the electrons, but that's the same.
And then the final term here, which is another bad one, is two electrons on B and none on A. Well, this is the form of our molecular orbital, and we can interpret these things, when we square them up, as probabilities for the electrons. If we let the two atoms get very far apart (of course, when they're close together everything's okay, all the wrinkles are smoothed out), it's quite a different story, because now we've got a big problem, and the big problem is this. We're predicting four possibilities when we dissociate H2 using this particular wave function: we either get H minus and H plus, or we get two H atoms (those are the two center terms), or we get H plus and H minus the other way around. But in the dissociation of H2, what we want to think about is producing two H atoms, not these weird ionic states, a hydride being barely bound and then a proton on the other side. And yet these come out in our wave function, in our molecular orbital, and therefore the problem is that the way we've set this up, which seemed to be the most obvious way to do it, is no good, because it includes too much ionic character. It says half the time you get H plus and H minus, and the other half of the time you get two hydrogen atoms. That's not in fact what we want to happen, but if we know that, then we can rationalize why the limit comes out wrong. And the way we can do that is, well, we sort of had the foresight (you might even think it was planned) to calculate the energy of the hydride anion earlier on. So we can go back to our notes on that; we've got that down pretty well, so we can take it. A bare proton at infinity, at rest, has no electronic energy at all, because the potential energy is zero and the kinetic energy is zero; it's just a bare proton. And so we can figure out what's going on. The optimum value of zeta is 1.0 for a hydrogen atom, because that's where it came from, but for the hydride anion we found, with a variational approach, that the optimum value of zeta was 11 over 16. There are two of each form, two hydrogen-atom terms and two hydride terms, and so the average value of zeta should be one fourth of 2 times 1 plus 2 times 11 sixteenths. If we work that out, then zeta bar, if you like, the average value, should be 27 over 32, which is about 0.84375, and that is in fact the optimum value of zeta as you go to infinity. That's the value that comes out of optimizing zeta as you keep increasing big R. And we can also explain why the energy has that particular value, minus 0.7119, because the energy of a hydrogen atom in a Slater orbital is zeta squared over 2 minus zeta, and we found for hydride that the energy in a Slater orbital is zeta squared minus 11 over 8 zeta. That's how we optimized it, if you recall. And there are 2 H atoms produced when H2 separates covalently, or 1 H minus and 1 H plus, with the H plus having no electronic energy. So if H2 separates one way, you get 2 H atoms, but if it separates the other way, you just get one hydride and a bare proton. Now, the wave function includes equal parts of those. There are four terms, but we only need two; we don't need to do the other two, because they're the same. So what I want, to figure out the energy as this thing falls apart in this orbital, is one half times the quantity: 2 times the energy of a hydrogen atom, plus the energy of a hydride.
And if I do that, and I put in the zeta squared minus 2 zeta and the 11 eighths and everything, what I find is that the energy of the MO as R goes to infinity should be zeta squared minus 27 over 16 zeta. But we know what zeta goes to, because we just figured that out: it should go to the average, 27 over 32. So we go ahead and put 27 over 32 into the formula, and what we get at the end, after simplifying, is minus the quantity 27 over 32 squared, which is minus 0.7119, exactly what is observed when you look at the number. So sometimes numbers hide a lot. They seem to be just a number from nowhere, and yet here it is: it's exactly minus 27 over 32 squared, and it's no coincidence that it has that value. Well, we can let R go to 0 too. If we let R go to 0, what we're doing is compressing our H2 into a helium atom. This is the great thing about doing thought experiments: you can do whatever you like and see whether it makes sense or not. Now, of course, what we'd better do is throw away the 1 over R repulsive term if we let big R go to 0, and that's because when they touch, we're assuming the strong force takes over. There would have to be some neutrons, but they don't change our electronic energy at all, so we don't really have to worry about that. In that case we have essentially a helium atom, and what we find is that our parameter zeta for the orbital goes to 27 over 16 as R goes to 0, and that agrees with the calculation for helium exactly. That's exactly what we found. So the problem with the dissociation is that we're including too much H plus and H minus in the products, and that's because of the way we set up the linear combination of atomic orbitals molecular orbital. It's just too much; it shouldn't be that much. Now, as far back as 1927, Heitler and London were trying to explain the chemical bond, and they wrote something which I'm going to call psi VB, for valence bond. They wrote just 1sA for electron 1 times 1sB for electron 2, plus 1sB for electron 1 times 1sA for electron 2. This seems very similar to the kind of trick we used when we finally got hydride to give an answer, and in fact, if you use this particular wave function, you can show that there is a chemical bond that exists as a pair of electrons, and this led to the so-called valence bond theory. And we're going to talk a little bit about that coming up, in a more qualitative way, because in a certain sense it seems like whatever happened to the bonds, the lines between the letters and the arrows moving electrons and all the things that you might have done in organic chemistry, it seems like that stuff has disappeared like the Cheshire cat. But it's coming back, don't worry, because we can actually connect our much more formal and insightful view of bonding back to a much more simplified view when that's expeditious. Usually computer programs are going to use the molecular orbital approach, so the valence bond approach is mostly of historical interest; it's not really used in modern computational programs. But it is very important, because it does suggest how to improve things. Instead of having the 50% ionic H plus H minus that we had with our molecular orbital, what we could try is a new wave function which is C1 times psi valence bond plus C2 times psi ionic, and then we could optimize C1 and C2 and get a better result. And we know how to do that by now, with the variational principle; we've done that many times in this course already.
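The arithmetic in that dissociation limit is easy to check numerically; here is a small sketch of my own, just re-using the formulas quoted above (energies in hartree):

```python
from scipy.optimize import minimize_scalar

def E_limit(z):
    """Energy of the simple MO wave function as R -> infinity: the average of
    two H-atom terms, E_H = z**2/2 - z, and two ionic terms, where the hydride
    has E = z**2 - (11/8)*z and the bare proton contributes nothing."""
    E_H = z**2 / 2 - z
    E_hydride = z**2 - (11 / 8) * z
    return 0.5 * (2 * E_H + E_hydride)   # simplifies to z**2 - (27/16)*z

res = minimize_scalar(E_limit, bounds=(0.5, 1.5), method="bounded")
print(res.x, 27 / 32)            # optimum zeta: 0.84375 = 27/32
print(res.fun, -(27 / 32) ** 2)  # limiting energy: -0.7119, not -1.0 for 2 H
```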
In fact, though, it turns out there's a slightly different way of looking at this, and it turns out that what we ought to do, counterintuitive as it seems, is include a bit of the anti-bonding orbital. Remember we had the combination plus-plus, and plus-minus, which had a node. If we include a little bit of the anti-bonding orbital in the mix, in a scheme called configuration interaction, or just CI, then we can get a much better result. That turns out to be rather similar to taking this valence bond plus ionic approach, but let's just look at it this way. If we include CI, then we can get the right dissociation limit for H2, so this solves our problem. Here's what we do. We write that psi, which I'll now call CI, for configuration interaction, is equal to some coefficient C1 times the bonding orbital for electron 1 times the bonding orbital for electron 2. Good, that's the part we had before. But now what we're going to do is expand our basis to include C2 times sigma star, the anti-bonding orbital, for electron 1, times sigma star for electron 2. I'm using sigma because it's a sigma bond, either bonding or anti-bonding. And I know what these sigma b and sigma a are: one of them is 1sA plus 1sB, and the other is 1sA minus 1sB. I can multiply all these things out, and what I get then is C1 times four terms, the same four terms I had before, 1sA(1) 1sA(2), 1sA(1) 1sB(2) and so forth, plus another term, C2, with the anti-bonding. And interestingly, I get 1sA(1) 1sA(2), then the cross terms with the opposite sign, and then 1sB(1) 1sB(2); that's just because of the symmetry. So the terms are very similar, a few of them change sign. And if I include more in my starting orbital and then optimize the amounts of each, you might think, well, if you pick an anti-bonding orbital, its coefficient is going to come out zero, because after all we call it anti-bonding. But that's a little bit naive, because, as we'll see, that's not quite how it works: C1 and C2 depend on R, and so they change character as we change R. And it's this extra flexibility, having C1 and C2 vary, that lets us get out of this soup. So I'll just cut to the chase. If I simplify this admixture of the bonding and the anti-bonding, what I get, compared to what I had before, is C1 minus C2 times psi valence bond, what Heitler and London wrote, plus C1 plus C2 times psi ionic. So they're very closely related; I just get a linear combination of the two. But the difference here is that the coefficients themselves, C1 and C2, depend on R. When I change R now, I have to re-optimize every part of the molecular orbital: the mixtures that I allow, what percentages, the zetas, everything, because what in fact happens when the nuclei change position is that the electrons, magically and quickly, in the spirit of the Born-Oppenheimer approximation, instantly find the optimum orbital on their own. We have to find it by clowning around a lot numerically, but in actual fact, in nature, it's instantaneous and doesn't take any great calculation. What we find, then, if we do this, is that as R goes to 0, C2 goes to 0. That's perfect, because as R goes to 0, that means the anti-bonding part is out of our hair. As R goes to infinity, however, C1 goes to root 2 over 2, and C2 now goes to minus root 2 over 2.
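That regrouping of terms is easy to verify symbolically; a quick sympy check, my own addition:

```python
from sympy import symbols, expand, simplify

A1, A2, B1, B2, C1, C2 = symbols("A1 A2 B1 B2 C1 C2")

# sigma bonding and anti-bonding for electrons 1 and 2 (normalization
# dropped, since it doesn't change which terms survive):
sb1, sb2 = A1 + B1, A2 + B2
sa1, sa2 = A1 - B1, A2 - B2

psi_CI = expand(C1 * sb1 * sb2 + C2 * sa1 * sa2)
# Claimed regrouping: ionic terms carry (C1 + C2), covalent carry (C1 - C2).
grouped = (C1 + C2) * (A1 * A2 + B1 * B2) + (C1 - C2) * (A1 * B2 + B1 * A2)
print(simplify(psi_CI - expand(grouped)))   # 0: the two forms are identical
# So in the R -> infinity limit just quoted, C2 = -C1 kills the ionic part.
```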
And what that means is that the second term, psi ionic, disappears, and by substituting in the coefficients, what we can see is that this configuration interaction wave function gives the correct answer for helium when R goes to zero, and it also gives two hydrogen atoms, and nothing else, as R goes to infinity. And that's perfect; it means we've solved our problem. Now the energy after dissociation is correct, and we've got no more conundrum. Of course, we could start all over. We did this with 1sA plus 1sB; we saw before that even for H2 plus we should include 2pz. We can go back, put that in, take bonding and anti-bonding combinations and so forth, and boy, it gets complicated quickly, but we get excellent results. So if you're willing to work harder, you get better answers. It's kind of satisfying. For the simplest calculation, then, wrapping up H2 with zeta equal to 1 and not optimizing it, we get that the energy, what I'll call E CI for the configuration interaction wave function, is minus 1.11865, and the radius is 1.668 a naught. This is better than our first try with the bonding MO. If we optimize zeta, then it increases and the energy becomes lower, minus 1.14794, and the radius, the internuclear distance, is now 1.43. And if I put in more atomic orbitals, I essentially get the correct answer to as many digits as the experimentalists can tell you what it is. But of course you need a computer to do that, because you have many, many, many integrals to do, and you can't hope to do them all by hand. And even after you get them done, you have a messy, messy optimization problem, which you may need some fairly sophisticated methods to solve. But that can all be done, and you get a very beautiful answer. So here's our qualitative picture. If we start with two atomic orbitals, we end up with two molecular orbitals, and we can summarize things with this qualitative picture here on slide 617, which is a molecular orbital diagram, one that some of you may already have used without understanding exactly what it meant in detail. We start out with the energy on the ordinate, and we put lines for the energies of the isolated atoms. We have two hydrogen atoms, so they have the same energy. But keep in mind, if you have two different atoms, like C and O, then all the levels are shifted, because oxygen has a much bigger positive charge in the center, which is pulling down orbitals. So you mustn't assume that if you call an orbital 2pz on oxygen, it has the same energy as 2pz on carbon; they do not. But here, by symmetry, they do. And then we take the good combination, which is lower in energy than the two isolated atoms, and the bad combination, the anti-bonding, which is higher in energy than the two isolated atoms. And then we draw two electrons in. To be strictly correct, it's probably better to draw half arrows for the electrons, so that a full arrow with a full point on the end means move two electrons, because when organic chemists write their reactions, they're usually having bonds move, sort of like a mousetrap snapping onto another atom, and they denote that with a double-headed arrow. But that's kind of difficult, because PowerPoint, for example, doesn't have a half-headed arrow, so you have to make them by hand. I took the trouble to make a few by hand here that I've put in. The lines in between, like the tennis tournament, are meant only to indicate, qualitatively, which atomic orbitals are present in the linear combination that's giving either the bonding or anti-bonding orbital.
And nothing more than that, really. To actually know the amounts, we have to do some kind of calculation; we can't just guess what amounts go where very easily. The energies themselves, unless they've been calculated somehow and drawn for you (if you're just drawing them yourself, they're guesses), move around a lot as you add electrons in, and you have to keep that in mind. So you mustn't just assume that a molecular orbital diagram is like a static tennis tournament where players defeat each other. It's not like that at all: they move around. The actual tournament itself moves around, and sometimes things that were above in one combination go below in another. We'll see that in the next lecture. The bond order is an important concept when you look at the molecular orbital diagram. It's the number of bonding electrons minus the number of anti-bonding electrons, divided by 2. For H2 we've got two bonding electrons and zero anti-bonding electrons, so the bond order is one half of 2 minus 0, which is 1. Therefore we predict a single bond between two hydrogen atoms, and that's why we draw one line. Usually, if we're talking about double or triple bonds, we draw 2 or 3 lines. For H2 plus, where we only have one bonding electron, the bond order is one half; that explains why it's very weakly bound compared to H2. And for the helium dimer ion, He2 plus, where there are 3 electrons, there are now 2 in the bonding and 1 in the anti-bonding, so the bond order is again one half. For He2, there are 2 bonding electrons and 2 anti-bonding electrons, and therefore the bond order is one half of 2 minus 2, or 0. If the bond order is 0, then what we're predicting is no bond. Basically, two helium atoms trying to form a bond have about the same energy as two helium atoms without the bond, so why should they do it? They'd prefer to maintain their independence, from entropy considerations. These are really qualitative pictures, because as we add more electrons we have all those 1 over r12, 1 over r13, 1 over r23 terms, and we never did any of those integrals; we just kept the same qualitative picture with these two levels and started pumping in electrons. In actual fact they'd move around, and if we wanted to get it right we'd have to actually do some work and figure it out. But we can make quantitative measurements, so here are some accepted data. For H2 plus, the configuration is 1s sigma g to the 1. The bond order is a half, and in fact what's measured is a bond length of 106 picometers and a binding energy, in units chemists are more used to, of 269 kilojoules per mole. For H2 we have 1s sigma g squared, and the bond order is 1; the bond length is 75 picometers and the binding energy is 458 kilojoules per mole. For the helium dimer ion, He2 plus, we now have an electron in the 1s sigma u anti-bonding orbital as well as the bonding orbital being full. The bond length is back to 106 picometers (that's probably a coincidence; I don't see why it should be exactly the same), and the binding energy is back to a similar value. For He2, where we have both orbitals full, the bond order is 0; the bond length is listed as 6000 picometers, and the binding energy is listed as about 0.01 kilojoules per mole.
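The bookkeeping in that table fits in a few lines of Python; a trivial sketch, using the electron counts from the configurations just quoted:

```python
def bond_order(n_bonding, n_antibonding):
    """Bond order = (bonding electrons - anti-bonding electrons) / 2."""
    return (n_bonding - n_antibonding) / 2

print(bond_order(1, 0))   # H2+:  0.5, weakly bound
print(bond_order(2, 0))   # H2:   1.0, a single bond
print(bond_order(2, 1))   # He2+: 0.5, weakly bound again
print(bond_order(2, 2))   # He2:  0.0, essentially no bond, consistent with
                          # the 6000 pm, ~0.01 kJ/mol entry above
```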
Well what that means is that physical chemists are very stubborn and if you say you can't make a bond or you can't make something stable at all they will try very hard and so it could be that there is an experiment where you cool down two helium atoms and you make sure nothing else is around and there's just those two and they have minor fluctuations of charge and so they kind of vaguely attract each other at this very, very long distance and people like to do that the same way they like to get a gigantic telescope and see how far they can see out into the universe. But that doesn't mean that the helium dimer exists under normal circumstances. It does exist in this very esoteric way with these very specialized experiments that have been done by experts but we never see the helium dimer under ordinary circumstances and that's why we consider helium to be a fairly ideal monatomic gas. Okay we'll leave it there. We're done now with our sort of very detailed exposition of bonding in these simple systems and of course to make it detailed we had to have a simple system because if we have three electrons on this atom and eight on that we can forget it. We're going to have so many integrals to do and it's going to take forever. And so what we have to do is learn how to zoom in on what's important and leave the rest of it aside, especially if it's going to be a ton of work. We don't want to end up digging a ditch with a teaspoon. That's a very bad idea. We just want to cut to the chase and say look here's the core, here are the valence electrons, here's what we have to calculate, here's how accurately we have to get it and here's qualitatively what it means in terms of structure, bonding and reactivity. So we'll pick it up next time. We'll actually get to the second row on the periodic table which will be exciting because there will be some chemistry aside from hydrogen and helium which is a little bit boring to stick with this long.
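As a quick aside before the course notes: the bond order bookkeeping from this lecture is pure arithmetic, so it is easy to mechanize. Here is a minimal Python sketch of mine; the function name and the species table are my own, and they just restate the orbital configurations discussed above.

```python
# A minimal sketch: bond order from molecular orbital occupation counts.
# The configurations below restate the ones given in the lecture.

def bond_order(n_bonding: int, n_antibonding: int) -> float:
    """Bond order = (bonding electrons - antibonding electrons) / 2."""
    return (n_bonding - n_antibonding) / 2

species = {
    "H2+":  (1, 0),   # (1s sigma_g)^1
    "H2":   (2, 0),   # (1s sigma_g)^2
    "He2+": (2, 1),   # (1s sigma_g)^2 (1s sigma_u*)^1
    "He2":  (2, 2),   # (1s sigma_g)^2 (1s sigma_u*)^2 -> bond order 0
}

for name, (nb, na) in species.items():
    print(f"{name:5s} bond order = {bond_order(nb, na)}")
```

The printed bond orders, 1/2, 1, 1/2, and 0, line up with the measured trend in the bond lengths and binding energies quoted above.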
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:03:28 Polarization 0:12:33 H2 0:15:50 Molecular Orbitals for H2 0:18:24 The Potential Energy Curve 0:20:19 The LCAO-MO Problem 0:31:50 The Valence Bond Approach 0:40:28 Wrapping up H2 0:45:53 Bond Order 0:47:28 Comparing H2+ Through He2
10.5446/18903 (DOI)
Let's continue on where we left off. We had the potential energy curves E plus and E minus for the molecular ion H2 plus. In this lecture I want to take a closer look at the molecular orbital we got. First I want to figure out what it actually is. We started out with these two basis functions, and we still need to figure out the coefficients. You can probably guess, from the names E plus and E minus and the symmetry of the problem, what the coefficients are going to be. But let's play dumb for the time being and pretend we don't know, and then if we don't know we will have an algorithm to figure it out. And then we're going to look at some measure of the quality of the kind of orbital that we got. That's a little bit of a deeper investigation that's going to lead us into something called the virial theorem, which I believe was developed by Clausius of the Clausius-Clapeyron equation. Well, we have some experimental values, but we won't always have experimental values for everything. In this case we do. And we also have very extensive calculations, and the variational principle tells us that no matter how big we make our basis set, we're going to be above the ground state energy, unless we have very bad round off error. So we have to do an accurate calculation. If you make things too big and you're too inaccurate, you can end up below by mistake because of numerical round off problems. We end up with R, in units of A naught, being 2.49. That's what I said last time, about 2 and a half, which is about 132 picometers. And the minimum energy, getting rid of the minus a half which is just the hydrogen atom, is minus 0.064831 hartree, which is minus 1.758 electron volts, which is minus 169.6 kilojoules per mole in units that chemists are more used to. The true values of course are better. The true values are that big R is just 2 A naught, which is about 106 picometers instead of 132. And the energy is better of course. The true energy is minus 0.10264 hartree, minus 2.79 electron volts, or minus 269.5 kilojoules per mole. This leads us to believe that, you know, we did okay; after all, we took a very simple approach, so it's amazing that it even works at all. And you can go back and ask yourself why it does work, and what you will conclude is that it's only on account of that integral K, the exchange integral, that it can possibly be stable. You can test that out on your own as a little practice problem. Get rid of K and see how it goes. And you will see that it's only this non-classical thing, where we have the two wave functions interacting with this operator in between, that we end up with something that can in fact be stable. Now it's kind of ironic that after fiddling around with the secular determinant and so forth, which after all we wanted to do in the first place to solve for the coefficients, the C's that tell us how much of 1sA and how much of 1sB is in the answer, we had this linear algebra problem. We said one answer for it to be 0 is that both C's are 0. That one's a dud, pretty uninteresting. The other answer is the secular determinant, and then we had those matrix elements. Then we had to work out what those were. Now we know what those are. So now we go back to our problem to figure out what these coefficients, which I'm going to call CA and CB, are. CA is for 1sA, CB is for 1sB.
Let's go back to the linear equations we began with and substitute in each of the energy solutions in turn, because they will have different coefficients: they're different energies and they're different wave functions. The first equation we get is CA times (HAA minus E) plus CB times (HAB minus E times S), remember, those were our equations, equals 0, but now we can put in E as either E plus or E minus. If I put in E plus, what I get is what I've got on slide 597, this big equation: CA times one factor plus CB times another factor equals 0. If I expand it out, which I've done in the second line, and then simplify it, what I find is that I have CA times (S times HAA minus HAB) plus CB times (HAB minus S times HAA), which is the negative of the first factor, equals 0. So that means that CA and CB are the same, because the factors multiplying them are equal and opposite. That means CA is equal to CB, and that means that for the plus energy I have 1sA plus 1sB, and so, not surprisingly, that's why we called it plus. And so now here I'm writing psi plus is equal to some constant CA, I don't need to use CB, times (1sA plus 1sB). Now I still need to normalize, because the integral of psi plus squared over all space should be 1, and I've done that here. The integral is easy because remember it's (1sA plus 1sB) times (1sA plus 1sB). The 1sA squared piece integrates to 1 because that's normalized. The 1sB squared piece integrates to 1. The BA and AB cross terms are S, and we know S. Therefore what we get is that the integral is equal to CA squared times (2 plus 2S), which I can factor as 2 times (1 plus S). Therefore CA is equal to 1 over the square root of 2 times (1 plus S), and so I can write my final result. Psi plus is 1 over the square root of 2 times (1 plus the overlap integral), all under the radical, times (1sA plus 1sB). I've tidied it up, and I know exactly what the energy is, and I know exactly what the minimum internuclear distance is and what the potential surface looks like. That looks great. With exactly the same sequence of steps, but substituting in the negative energy instead of the positive energy, I get 1 over the square root of 2 times (1 minus S), rather than (1 plus S), times the quantity (1sA minus 1sB). I have to choose which one's going to be plus and minus, but it doesn't matter, because remember the phase of the wave function doesn't matter. We usually choose it to be real if we can, because we're very biased about that, but whether there's a minus 1 out in front or not, it doesn't change the overall probability density. So whether it's 1sA minus 1sB or the other way around, it's the same thing. It doesn't change anything about the problem.
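Here is a small symbolic sketch of that coefficient solution using SymPy. The variable names are mine, and this is just a replay of the algebra above, not anything from the original lecture materials.

```python
# A sketch, assuming H_BB = H_AA and H_BA = H_AB (two identical atoms).
import sympy as sp

H_AA, H_AB, S, E, cA, cB = sp.symbols('H_AA H_AB S E c_A c_B', real=True)

# Secular determinant and its two energy roots:
det = sp.Matrix([[H_AA - E, H_AB - E*S],
                 [H_AB - E*S, H_AA - E]]).det()
print(sp.solve(sp.Eq(det, 0), E))
# -> [(H_AA - H_AB)/(1 - S), (H_AA + H_AB)/(1 + S)]  (order may vary)

# Substitute E_plus back into the first linear equation:
E_plus = (H_AA + H_AB)/(1 + S)
eq = sp.factor(sp.simplify(cA*(H_AA - E_plus) + cB*(H_AB - E_plus*S)))
print(eq)  # proportional to (c_A - c_B)*(S*H_AA - H_AB), so c_A = c_B

# Normalization: c_A**2 * (2 + 2*S) = 1
cA_norm = 1/sp.sqrt(2*(1 + S))
print(cA_norm)
```

The factored equation vanishes only when c_A equals c_B, which is exactly the in-phase combination described above.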
The orbitals, though: I've drawn them here, and I wouldn't swear that these are 100 percent accurate, but they're probably pretty accurate. When the two nuclei are far apart, there are two cases that I've got here in red and blue, kind of an angry looking red. Either the two are in phase, 1sA and 1sB both red, or one of them is the opposite phase, a negative wave function. Remember, that's nothing to do with charge density, that's just the phase, whether it's minus or plus, and that's in blue. And then if we bring the two red ones together, we bring these exponentials, remember they have a cusp at the nucleus, so each looks like a little tent. We bring these tents together as they get closer, and in between, where they're both positive, they kind of make a catenary, like a bridge, or like at a movie theater where they keep you out by hanging those velour ropes. And then if I square it, then where it's big and it adds up, when you square it, it gets bigger, and that means in some sense that the electron is spending a lot of time in between the two positive charges, which is perfect when you want to think of the electron gluing the nuclei together; that's where you'd expect it to be, and you get this kind of sausage shaped thing. The exact shape of the sausage depends on how close you allow the nuclei to get. And then in the second line, one of them is blue, and it's the out-of-phase combination. Now I've got two things, but one of them is hot, one of them is cold, and in the middle it's zero. As you bring them together, they just cancel out more and more; in fact, if you brought them right on top of each other they'd disappear. That's very bad, and what happens is that as they come together, in between the two nuclei it's always zero. In other words, there's a nodal plane, and I've colored the top part red and the bottom part blue in the third figure on the second line. And then when you square that, of course when you square it they should both be red, because a square is a positive number, but what I've done is I've left one of them blue and squared it anyway, to show that where they cancel, then when you square it, they really cancel in between. And that really explains why that one is repulsive, because if it's canceling, and as you get closer and closer it cancels more and more and more, then all that happens is the protons see each other right up close, and they hate each other because they've got a huge positive repulsion. They're both positive charges, and that explains completely why that E minus curve just goes up and up and up, and only when they're very far apart and there's essentially no cancellation do they even get back to the same energy as our zero, a hydrogen atom. The lowest energy solution, with the atomic orbitals in phase, is called a bonding orbital, because it makes the nuclei stick together, and it's given the symbol sigma to indicate that it has s symmetry: it's cylindrically symmetrical, as much like an s orbital as it can be and not be an atom. And the other solution, with the atomic orbitals of opposite phase, is called an anti-bonding orbital, and it's given the symbol sigma star. Star usually means excited or unfavorable or something like that, and that's exactly what this means. If you put an electron in there, it's unfavorable for the future of the molecule. The bonding orbital has even symmetry and the anti-bonding orbital has odd symmetry. So in the notation with our g and u, it would be sigma g for the one with even symmetry and sigma star u for the one with odd symmetry, and oftentimes molecular orbitals are labeled this way to help you understand what they look like. The potential surface of the anti-bonding orbital is purely repulsive. There's a node between the two nuclei and you cannot ever get the molecule to be stable with that root. Now let's have a look at how good our solution is. We know it doesn't match the experiment, but what we'll find is that it's flawed in another way that we haven't even talked about.
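To make the nodal plane concrete, here is a tiny numerical sketch of my own, in atomic units, with the nuclei placed at z equal to plus and minus R/2 along the bond axis; the distances and grid are arbitrary choices, and the orbitals are left unnormalized since only the shape matters here.

```python
# A sketch of the in-phase and out-of-phase 1s combinations along the bond axis.
import numpy as np

R = 2.0                                  # internuclear distance in units of a0
z = np.linspace(-4.0, 4.0, 9)            # sample points along the bond axis

rA = np.abs(z + R/2)                     # distance to nucleus A (on the axis)
rB = np.abs(z - R/2)                     # distance to nucleus B

psi_plus  = np.exp(-rA) + np.exp(-rB)    # bonding: piles up between the nuclei
psi_minus = np.exp(-rA) - np.exp(-rB)    # antibonding: cancels in between

for zi, p, m in zip(z, psi_plus, psi_minus):
    print(f"z = {zi:5.1f}   psi+ = {p:7.4f}   psi- = {m:8.4f}")
# psi- passes through exactly zero at z = 0, the nodal plane;
# psi+ stays sizable there, the catenary between the two tents.
```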
There's a powerful general observation from classical mechanics that relates the expectation values of the kinetic energy and the potential energy to whatever the force law is between the particles. If the force law is simple, then it's simple to figure out, and you may not have had a proper course in classical mechanics, maybe that's later on in the series and you never got to it, so I'm just going to tell you what this virial theorem says without deriving it. If we have a potential V of r that is something like a constant, say a, times r to the n, where n is some power, then the virial theorem says that two times the expectation value of the kinetic energy, which I've called T here for short rather than KE, is equal to n times the expectation value of the potential energy. And for us that's equal to minus 1 times the expectation value of the potential energy, because our potential with charged particles always has 1 over r, so n is minus 1. The force law is 1 over r squared; the potential is 1 over r. So now, if we've got a different kind of force law, then we end up with a different ratio of T and V. The question is, does our solution, our wave function, satisfy the virial theorem or not? Let's have a look, because if it doesn't satisfy the virial theorem, then it's not very good, and that could be another reason why it's not very good. And that means that if we can adjust it, if we can tweak it so that it does satisfy the virial theorem, it's liable to be much, much better. This is kind of an independent check on our system, because nowhere when we did this calculation did we say, oh, by the way, 2 times the expectation value of the kinetic energy should be minus 1 times the expectation value of the potential energy. This is like an auditor coming in to a company and just asking, how good are the books? And we couldn't fiddle the books, because we didn't have anything to do with it. Let's then, as a practice problem, apply the virial theorem first to a classical harmonic oscillator, because it came from classical mechanics; there was no quantum mechanics when Rudolf Clausius was proposing this. And then let's apply it to the ground state of the simple harmonic oscillator. Why? Well, we already had that wave function, and we'll see whether the quantum oscillator and the classical oscillator both satisfy the virial theorem. And then after we do that, and we're confident that we know what we're doing, we'll apply it to our system and see how we do. OK, practice problem 29. Consider a classical harmonic oscillator. It has reduced mass m and force constant k. Does it conform to the virial theorem? Part b, how about a quantum oscillator with the same values of k and m? OK, here's our answer to part a. The total energy of the classical oscillator is E equals T plus V, which is 1 half m v squared plus 1 half k x squared. And of course, energy is conserved over time in the oscillator; it just changes form. If we take this and use Newton's equations, which is what we have to do, of course, to solve problems in classical mechanics, or if we get more advanced, we get into the Euler-Lagrange equations, but we won't do that here: force is equal to mass times acceleration, F equals m a. Acceleration is the derivative of the velocity, so that's m dv dt, and v is the derivative of the position. So F is equal to m d squared x dt squared. But force is also equal to minus the derivative of the potential, so F is also equal to minus dV dx. And dV dx is the derivative of 1 half k x squared, which is just k x.
And so what we end up with is this equation in the middle of slide 604: d squared x dt squared is equal to minus k over m times x. Well, this equation ought to be really, really familiar, because look, it's exactly the same kind of thing, with some different variables, as what we ran into with the particle in a box. The second derivative of the function is proportional to the function itself. So it resembles the particle in a box, just with a change of variables, with time here rather than x, and with x playing the role of the wave function rather than psi. So let's suppose at time zero that the oscillator is extended to whatever its maximum is. Let's call it x max at time t equals zero. Or it could be compressed; let's take it at the maximum. And let's assume that at the maximum it's stationary. Well, it has to be stationary, otherwise it would go farther, and if it were moving the other way, it couldn't have got there in the first place. So it's stationary. None of this matters to the problem one iota, but it makes it easier to calculate. Then the solution is x of t equals x max cosine omega t, and v of t equals minus omega x max sine omega t, because v of t is the derivative of x of t, and omega, the angular frequency, is equal to the square root of k upon m. Next, we have to say what we interpret the average kinetic energy and average potential energy to be. The interpretation in classical mechanics and quantum mechanics is different. In classical mechanics, we interpret it to be a time average, because the energy of this particle, which has definite position and momentum at all times, is changing between kinetic and potential, so we want to take the time average. And we want to take the time average over one cycle. So you go out, you come back, you go through and come back again, stop. That's the correct average; we don't want to include anything else. So we want to integrate over 2 pi in the angular variable omega t, or 360 degrees around a circle, same thing. If we do that, then, recall, if you're going to take the mean value of something, you have to divide by the length of the interval, so the average of the kinetic energy is 1 over 2 pi times the integral, from omega t equals 0 to omega t equals 2 pi, of d(omega t) times 1 half m v squared. And if I put in v squared, I end up with 1 half m omega squared x max squared sine squared omega t. I've got to do that integral of sine squared by parts, or look it up. And if I do that and go through the whole thing, the pi's cancel out as they always do, and I end up with k x max squared over 4. That's half of the available energy, because the most it can be, when it's stationary out at x max, is 1 half k x max squared; this is half of that. Not surprisingly then, the other part, the average of the potential, is going to be the other half. And you could just assume that and you'd be right, because energy is conserved. But I'm not going to assume that. I'm going to slug it out and calculate it, so I can have more confidence that I know what I'm doing. The average potential energy is again 1 over 2 pi times the integral, from omega t equals 0 to omega t equals 2 pi, of d(omega t) times 1 half k x squared. Here x depends on t, so I put in my x max squared cosine squared omega t. I do the integral again, and out comes k x max squared over 4. That's the other half. The average potential energy is thus exactly equal to the average kinetic energy for the classical harmonic oscillator.
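A quick numeric cross-check of those time averages; this is a sketch of mine, and the mass, force constant, and amplitude are arbitrary choices.

```python
# Time averages of T and V for the classical oscillator over one period.
import numpy as np

m, k, xmax = 1.0, 4.0, 1.5
omega = np.sqrt(k/m)

t = np.linspace(0.0, 2*np.pi/omega, 200001)   # uniform grid, one full period
x = xmax*np.cos(omega*t)
v = -omega*xmax*np.sin(omega*t)

# On a uniform grid over a full period, a plain mean approximates the
# time average to high accuracy.
T_avg = np.mean(0.5*m*v**2)
V_avg = np.mean(0.5*k*x**2)
print(T_avg, V_avg, k*xmax**2/4)   # all three are ~2.25: <T> = <V>
```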
But what did the virial theorem say? The virial theorem says that 2 times the average of the kinetic energy is equal to n times the average of the potential energy. But n is 2, because our potential is 1 half k x squared. So then 2T equals nV becomes 2T equals 2V, and that's just T equals V. That's what we got. Therefore, it satisfies the virial theorem. Part b, for the quantum oscillator: recall in lecture 8, we solved that. Boy, that seems like a long time ago; we've covered a lot of ground since then. But here's the ground state wave function for the simple harmonic oscillator. I have m omega over pi h bar, all raised to the one fourth power, I had that funny thing to normalize it, and then a Gaussian function, e to the minus m omega x squared over 2 h bar. I've just written exactly the same thing in slightly different terms, keeping in the m and the omega so that it's more comparable to the classical oscillator. But there's nothing different here. Okay, now how do we calculate the average of the kinetic energy and the potential energy here? Well, we know how to: we calculate the expectation value. That's what the angular brackets mean in quantum mechanics. In classical mechanics they may mean a time average, like we saw, but for quantum mechanics we know exactly what to do. So the expectation value of T is the integral of the ground state wave function times minus h bar squared over 2m times the second derivative with respect to x, acting again on the Gaussian. I take the derivative, down comes an x, and then I take the derivative of x times the Gaussian and get two terms. And I end up with this term out in front, this big term with the square root of 2 times root pi in the denominator, and two terms: one goes as h bar omega, the other as minus m omega squared x squared, all times the Gaussian. I have to do that integral by parts twice, and then I get the answer, and the answer comes out to be h bar omega over 4. How neat, because we know that the zero point energy is h bar omega over 2; that was the energy that had to be there by the uncertainty principle. So again, it's half: half of the available zero point energy is assigned to the kinetic energy in the ground state of the harmonic oscillator. Once again, we could assume the other half's potential, but actually, if you get good at doing these, they're kind of fun, so why not do it? So here's the calculation of the expectation value of the potential. I put in 1 half m omega squared x squared. That's integrating by parts twice again, but I'm getting awfully good at that; in fact, I'm getting so good I can almost do it in my head. That turns out, again without too much trouble, to be h bar omega over 4. So for the quantum oscillator, the expectation value of the kinetic energy is h bar omega over 4, the expectation value of the potential energy is h bar omega over 4, they are equal, and it satisfies the virial theorem once again. Good.
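The quantum version can be checked symbolically as well. Here is a short SymPy sketch of mine that just redoes the two Gaussian integrals.

```python
# Symbolic check: the harmonic-oscillator ground state gives <T> = <V> = hbar*omega/4.
import sympy as sp

x = sp.symbols('x', real=True)
m, w, hbar = sp.symbols('m omega hbar', positive=True)

# Ground-state wave function of the harmonic oscillator:
psi = (m*w/(sp.pi*hbar))**sp.Rational(1, 4) * sp.exp(-m*w*x**2/(2*hbar))

T = sp.integrate(psi*(-hbar**2/(2*m))*sp.diff(psi, x, 2), (x, -sp.oo, sp.oo))
V = sp.integrate(psi*(m*w**2*x**2/2)*psi, (x, -sp.oo, sp.oo))
print(sp.simplify(T), sp.simplify(V))   # hbar*omega/4 and hbar*omega/4
```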
Now let's check our wave function. For H2 plus, the virial theorem says, with the r to the minus 1 potential function, that 2 times the kinetic energy should equal minus 1 times the potential energy. Well, we've got our ground state wave function, psi plus. Let's go ahead and do it. This is now an independent check. And you can go back to slide 526, to that online thing, and you can see that there's this cryptic thing off to the side, virial equals, and then some number that's very close to minus 2. That's kind of their quality control, their measure; minus 1.9999 or whatever, who knows if that's just round off. You sometimes get something like that. At the bottom here of slide 611, I have the expectation value of the kinetic energy, T, equal to the integral over dr of psi plus, times, in atomic units, minus 1 half del squared, times psi plus. And that is equal to, if I do everything, because I have all the integrals so I don't have to do anything new, I just write it down: one half, minus s of R over 2, plus k of R, all divided by 1 plus s of R. That's not any big deal, because the derivative of e to the minus rA is just proportional to e to the minus rA again, and the other pieces are the definitions of k and s. For the potential we have to integrate psi plus times minus 1 over rA times psi plus, plus the integral of psi plus times minus 1 over rB times psi plus. And for that one we end up with minus the quantity (1 plus j of R plus 2 k of R), divided by (1 plus s of R), plus 1 over R. I know what all these functions are, and both of them depend on the parameter big R, because all these integrals depend on how close the nuclei are to each other. Obviously the ratio can't be minus 2 for all values of big R, but that's fine, because what we want to know is whether it satisfies the virial theorem at the most stable point, because that's where we're operating. So when R is equal to Re, the equilibrium position, the minimum of the well, how close does it come to satisfying the virial theorem? Well, we know the minimum is 2.493 times the Bohr radius, and if we just put in R equal to that value, because we're in atomic units we just put in R equals 2.493, we end up with the expectation value of the kinetic energy being 0.3827 hartree and the expectation value of the potential energy being minus 0.9475. So that's sad, because at the best condition, R equal to Re, we end up with a ratio of minus 2.48, which is way off minus 2. The auditors have come in and said, you are missing a lot of money. In fact, we have about a 25 percent error with this function that we worked like crazy to get, and it's still no good. Now the question is how we can fix it. Well, the one thing that we didn't let the atoms do is we didn't let the 1s orbital change its size, and you could guess that it's got to adjust to having both nuclei in the sleeping bag there. Unfortunately, that means that we would have to go back and calculate everything with our friend zeta. So although this orbital is not very good, we have to take a pretty deep breath before we do that. Now that you know about the virial theorem, you can go back to the other problems we did, because it holds for atoms as well, and you can go back to our attempts on hydride and helium, and even the 1s state of the hydrogen atom if you like, and figure out the expectation values of T and V. It's a very good exercise. And you can see, now you've got an independent check: how did those other wave functions do? Because we were just playing around there, looking at the energy, trying to get it to be stable. We never went to this level. Well, we're going to have to introduce our friend zeta again, to let the orbitals adjust to the new environment. And that means we're going to have to go back and do all our matrix elements, HAA, HAB, S, over again, and calculate them. I can hear you groaning even though you aren't here, and I feel your pain. So here's the new orbital. We had 1 over the square root of pi, e to the minus r. That was a simple one; that's just the 1s.
Now it goes to (zeta cubed over pi), all to the one half power, times e to the minus zeta r, the so-called Slater 1s orbital, where zeta is a parameter. And we would have to go through all our orbitals, all our matrix elements, over again with this new thing and calculate them. But luckily we don't have to do that, because somebody's already done it, and the integrals over Slater orbitals are already tabulated. So all we have to do is set them up so we recognize whether it's S, J, or K, and then crank out the answer, and we can figure it out pretty quickly. Therefore, rather than working them out blow by blow like I have done for some of the others, we're just going to look them up as functions of zeta, and then we're going to cut to the chase and minimize the thing as a function of zeta. Even that won't be so very easy, but let's have a look. Most of the terms are pretty predictable. So here on slide 616, what I've written is T, the expectation value of the kinetic energy, as a function of zeta and big R. Basically it's very similar to the other expressions: there's a zeta squared over 2, minus zeta squared times terms involving S and K. Now the S and K are functions of zeta times R, because that's what's in the exponent, so they appear there together; but then there are some zetas floating around by themselves. And for the potential energy it's again very similar. There's the one term, plus 1 over R, which is the repulsion of the nuclei. That one doesn't depend on which electronic orbitals you pick, because it's just the two positive charges pushing apart, so you could guess that one's not going to have a zeta in it. Now we've got these two things, we can add them up, and then we can plot the energy E as a function of both zeta, which I've plotted on the y axis on this slide, and R. I didn't know what zeta was, but I guessed that zeta would have to be bigger than 1, because I didn't see any mileage in having the orbital expand. So I thought, well, unlike hydride, probably zeta's bigger than 1, so I plotted it from 1 to 1 and a half. And I didn't know what R over A naught should be either, but because the true value was around 2 and the value I got before was like 2.49, I thought that if it improves, R should get smaller, the nuclei should be held together better. So I plotted that from about 1 and a half to 2 and a half or so. And bingo, I got a deep minimum, that little red spot, which is almost in the center by coincidence. In fact, I had to plot it again because I couldn't believe I was that lucky, but I was, and I found that the lowest energy was at R over A naught equal to 2.003, which is fantastic, just about right on, and that's 106 picometers. And zeta is bigger than 1: it's 1.238. That's the numerical answer if you minimize it. And the minimum energy, getting rid of the minus a half, because the minus a half is in there, is minus 0.08651 hartree, minus 2.345 electron volts, or minus 226.3 kilojoules per mole. Way better than before. But now, after we introduce this zeta, let's see what our auditor says. Let's see if the auditor says we made a good move. Well, now I can calculate the expectation value of T with this wave function, no big deal, 0.5865 hartree, and for the potential, minus 1.173, and guess what, the ratio is minus 2.000. Right on the money. So we improved every aspect of our solution by letting the 1s orbital adjust its size.
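There is a tidy reason the optimized zeta lands exactly on the virial ratio. Under a coordinate scaling by zeta, applied to the electron coordinate and, for a molecule, to the internuclear distance along with it, the kinetic energy scales as zeta squared and every Coulomb term scales as zeta. Here is a sketch of that argument in SymPy; T1 and V1 are my own stand-ins for the zeta equals 1 expectation values, and this is an illustration of the general scaling idea, not the lecturer's own derivation.

```python
# Minimizing over a scale parameter automatically enforces the virial theorem.
import sympy as sp

zeta, T1, V1 = sp.symbols('zeta T1 V1', positive=True)

E = zeta**2*T1 - zeta*V1                 # <T> ~ zeta^2, <V> = -zeta*V1 (V1 > 0)
zeta_opt = sp.solve(sp.diff(E, zeta), zeta)[0]
print(zeta_opt)                          # V1/(2*T1)

T_opt = zeta_opt**2*T1
V_opt = -zeta_opt*V1
print(sp.simplify(2*T_opt + V_opt))      # 0, i.e. 2<T> = -<V> at the minimum
```

So hitting minus 2.000 after optimizing zeta (together with R) is not luck: the optimization itself builds in the virial condition, without guaranteeing the energy is right.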
That's kind of interesting, because we might have taken another strategy. We might have just said, look, let's find zeta such that the virial theorem is satisfied. Of course, then we couldn't use it as a check, because we'd have used it as the input. But we didn't do that. All we said is, let's minimize the energy with this extra flexibility, and the auditor, the virial theorem, came back and said the books are right on. Now, that doesn't mean that that's the correct energy. That just means it's not obviously wrong. Lots of things are like that: it's not obviously wrong, but that doesn't mean it's absolutely correct either. It's somewhere in between. There are lots of solutions that have the virial criterion met, and they have different quality. How could we interpret this? Well, starting with just the hydrogen 1s wave functions is too restrictive. They have to be allowed to adjust their size to incorporate the presence of the other nucleus; that's how I would interpret it, anyway. The energy is much better, it's still not perfect, and at least the virial theorem is satisfied. Now, if we want to do better, we know the prescription: we have to include other functions in the mix. And if you're doing this by hand, you'd better think long and hard about which functions you're going to include, because whichever ones you include, you're going to have a mess to integrate, and when you start including more of them you have many, many, many more integrals to do. So you can do a ton of work and then it doesn't come out any better, and I'm going to show you, in the last part of this, that minor sob story. I'm going to pick something to add to the 1s that turns out not to be a very smart pick, and next time we're going to learn why it isn't a very smart pick and how to make a much smarter pick. Well, you might guess, sort of like hydride, remember, where I said we could include a 2s along with the 1s; we didn't do that there, we included things with two values of zeta, but it would work. So what I could include here is some coefficient C1 times (1sA plus 1sB), and these are Slater orbitals, so they can already adjust; it's never going to be worse than what I had. Plus a second coefficient times (2sA plus 2sB), which again are Slater orbitals; they just have an extra factor of r. And I really like that, because if I stick to Slater orbitals, the integrals are all tabulated and I don't have to do any of them. Before the advent of software that would do the integrals for you, that was extremely important, because there's no way you could make any progress if you had to do them all by hand. Interestingly enough, here's a table. There's the wave function, here's E min, and here's R sub e. The first entry is the very first thing we did, 1sA plus 1sB, which I called 1s, zeta equals 1. The minimum energy, including the minus a half now, is minus 0.56483, and Re is 2.49. If I let it adjust, with zeta equal to 1.238, then I end up with minus 0.58651, and Re is 2.00. And now I do a ton of work and I get this funky wave function: 0.7071 times 1s with zeta equal to 1.24, plus 0.001 times 2s with zeta equal to 1.24. And I get the same energy, minus 0.58651, and I get the same equilibrium bond distance, 2.00. That would be really disappointing if we had actually done that calculation. But if you include a smidge of something in the wave function, and the smidge is really like a spice at the end, just a pinch of salt, 0.001, what that's telling you right away is that whatever you selected is not very important. It's not going to be at the top of the list of ingredients.
What you want to do is figure out what you can add that will end up with a reasonable coefficient, more like 0.1 or higher. So you can get a clue that a candidate is not going to help very much just by looking at the size of its coefficient. Next time, then, what we're going to do is figure out, first by thinking about the problem a little bit more, without necessarily going through all the calculations, because they would be quite difficult, what kind of function we could add that would improve things even more, compared to the Slater 1s orbitals that have already adjusted. And that will lead us to the idea of including polarization. It really comes down to the fact that an s orbital is still spherical and the problem is elliptical. So we shouldn't expect it to improve much by just adding more s functions, because we already let a round thing change its size in order to make the optimized orbital that satisfied the virial theorem. We need something that's shaped more like a sausage, which we'll see is going to be a pz orbital. And we'll do that next time.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:01:11 Comparing the Results 0:04:04 H2+ Molecular Orbitals 0:05:32 The Coefficients 0:06:58 Normalization 0:09:00 The Orbitals are Different 0:14:22 The Virial Theorem 0:28:27 Checking our MO 0:37:01 Optimizing the Energy
10.5446/18896 (DOI)
Hi, welcome back. Today what we're going to discuss is the hydride ion. We're going to continue our calculation on that. Recall, we had just gotten to the point where we were looking at the electron repulsion term as a perturbation. And we're going to look at some other two-electron systems. We're going to be forced to work pretty hard today in order to try to reproduce any kind of properties that are known about these systems. And this is a very good test of the theory, of course, because you should be able to reproduce things like the ionization energy of helium and so on, if your theory is correct and you can take the approximations far enough. Recall, here's what we had. We had an expression from first order time independent perturbation theory, and we were going to use it to compute the correction to the sum of the two hydrogen atom energies, from each electron interacting independently with the positive nucleus. So what we had is this sandwiched integral, with psi star on the left, then 1 over r12, and then psi on the right. And the thing is, the wave functions themselves are functions of the coordinates r1 and r2, and therefore what I have to do is express 1 over r12, the distance between the electrons, in terms of only r1, r2, and maybe some other coordinate that I'm integrating over. Because I can't integrate a function if I don't know the functional dependence of the integrand. I can't just have some variable y in an integral when I don't know how y depends on x, or whether it does; then I can't do the integral with respect to x. With that in mind, let's take a look at this figure. What I've drawn here is an obtuse triangle with r1 oriented along z. Remember, we were going to do that: we were going to reorient the axes every time we do the integral over the other variable, so that the first electron is along z. And because everything is spherically symmetric, that doesn't change the answer at all in terms of the energy. What I've done is I've added a little length to r1, so that by the time it gets out to r2 it forms a right triangle. The little length I've added, I've called a, and the distance on the other side of the right triangle, I've called b. The angle between r1 and r2, which is bigger than 90 degrees in this figure, I've called theta, and that means that the other angle, on the other side of the line, is pi minus theta, or 180 degrees minus theta. In order to get an expression for the distance r12, which is between the two electrons, there are two right triangles. There's one that involves the small figure, which gives a squared plus b squared equals r2 squared. And there's another one, the big triangle, which gives (r1 plus a), quantity squared, always add them first and then square, plus b squared equals r12 squared. That's the big triangle. Expanding the second equation, then, and just writing it out, we get r12 squared equals a squared plus 2 a r1 plus r1 squared plus b squared. The a squared plus b squared I can gather together, and by the other triangle, that's r2 squared. So r12 squared is equal to r2 squared, good, that's the variable r2, plus r1 squared, good, that's the variable r1, plus 2 times a times r1. No good. What's a? I have to know what a is to be able to integrate over the thing. But luckily, a, by that small triangle, is equal to r2 times the cosine of the angle that's nearest to a, and that angle is pi minus theta. And I could go back to Euler's identity and put in an e to the i theta to figure out the cosine of pi minus theta.
But I know in fact it just changes sign, so that's minus r2 cosine theta. And that's a, so I substitute that value of a in, and I get what's called the law of cosines, which you could go back to the Pythagoreans or Euclid and find that they were smart enough to figure out as well: r12 squared is equal to r2 squared plus r1 squared minus 2 r1 r2 cosine theta. Theta is okay, because remember, in spherical coordinates, theta being the angle between the two vectors is perfect: if one of them is along z, then theta is exactly the variable I'm integrating over for the other one. So now we're set to go and we can do the integral. In the case that theta is less than 90, so that it's not an obtuse triangle but an acute one, I'll let you draw the triangles; you draw them slightly differently, but you come to exactly the same conclusion, namely that this formula is always valid. And of course, if theta is equal to 90 degrees, so that it's just a right triangle, then the cosine theta term is zero, that goes away very conveniently, and we just have the Pythagorean theorem. So the law of cosines is just a generalization of the Pythagorean theorem for the case where it's not exactly a right triangle. So now, what we've done here at the bottom of slide 440 is we've put in 1 over r12, and because the first wave function, which depends only on r1, has no dependence on r2, they're independent variables, I've factored it out. And I have a shorthand notation here that I'm using, just to try to fit the equations onto the slide, basically. I write a single integral, d vector r2. What that really means is that I'm going to integrate over phi, I'm going to integrate over theta, and I'm going to integrate over the scalar r from zero to infinity. I'm going to do all those things when it comes right down to it, but just to keep track, as a placeholder, I've got that one integral sign. It's going to be a triple integral, but I write it as one to make the equation a little bit easier to see. Don't let that notation throw you off; we'll get to it. Now, this still is not so easy, because how do I do this integral? I've got the square root of all this spinach in the denominator, and it doesn't necessarily suggest the answer right away. Well, let's put in the atomic wave function. That's 1 over root pi, times 1 over a naught to the 3 halves, times e to the minus r over a naught. And it's the same whether it's r1 or r2; it's just the coordinate of the electron, the wave function is the same, it just has a different variable. In atomic units it's 1 over root pi times e to the minus r. And therefore, again you see why, because I can barely fit the equation on the slide, I'm going to leave the first integration, with respect to r1, as just a symbolic thing. Then, for the second integration, with respect to r2, I have the integral over theta of sine theta, because remember, that was part of the volume element that I needed, with the minus 2 r1 r2 cosine theta down under the square root. Then I have the integral over phi; that doesn't bother me, the integral over d phi 2, I don't see any dependence on phi. Then I have the integral over r, and I have to remember to put in the r squared there. Okay. So I make a substitution, which you tend to do when you have trigonometric functions: if you have an algebraic function you can't do, you tend to make a trigonometric substitution, and if you have a trigonometric function you can't do, you tend to make an algebraic substitution.
Here what I'm going to do is let the variable x be cosine of theta 2. Then dx is equal to minus sine theta 2 d theta 2. Therefore the integral over d theta 2 of sine theta 2 over the radical is equal to minus the integral from 1, which is when theta is equal to 0, cosine is 1, down to minus 1, of dx over the square root of r1 squared plus r2 squared minus 2 r1 r2 x. Then I can flip the limits, make it from minus 1 to 1, and get rid of the negative sign. And that one I can look up the antiderivative for, because that one is standard, very easy to do. I'll let you verify it: take the actual antiderivative I've given you here at the bottom of slide 442, differentiate it with respect to x, and verify that you get the integrand that we started with. We get minus the square root of (r1 squared plus r2 squared minus 2 r1 r2 x), divided by r1 r2. I think it's pretty easy to see where that came from once you start taking the derivative. If we put in the limits and do the subtraction, we get the following: (r1 plus r2, minus the absolute value of r1 minus r2), divided by r1 r2. And recall that the square root of x squared is the absolute value of x, because the square root is positive. Therefore our integral over theta is equal to this. It's already kind of interesting, because you may not have encountered this before: the integral over theta of that sine theta over the square root is equal to 2 over r1 if r1 is bigger than r2, but it's equal to 2 over r2 if r1 is less than r2. So it's equal to 2 over the bigger of the two. And that means when I integrate over the other variable, I have to be very careful to pick the right formula in the right range. If I fix r1 and integrate r2 up to r1, I should use one formula, and then when I integrate r2 the rest of the way to infinity, I should use the other formula. If I'm not careful about how I do that, I get the wrong answer. Now you can imagine, if you have a lot of electrons and a lot of integrals that keep doing this kind of thing, where the answer depends on who's where, it can get mighty tricky to keep track of the right way to do things. So this might be your first introduction to this kind of straightforward problem that gives a conditional antiderivative, one that depends on what's going on. The integral over phi 2 gives 2 pi, big deal, nothing to do there. The integral over theta 1 and phi 1: there's no theta 1 dependence left, the theta that appeared was theta 2, so that just gives 4 pi, like it always does, because 4 pi is the full solid angle in spherical coordinates. So there's nothing else to do except the radial part. And we have to break the radial part into two integrals: first 1 over r1 times the integral of dr2, r2 squared, e to the minus 2 r2, from 0 to r1, and then the rest of the way, from r1 to infinity, it's just dr2, r2, e to the minus 2 r2, because the 1 over r2 ate one power of r2. So we have one integrand that's r2 e to the minus 2 r2, and another that's r2 squared e to the minus 2 r2.
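That conditional result from the theta integration is easy to spot-check numerically. Here is a small SciPy sketch of mine; the sample radii are arbitrary.

```python
# Numeric check: the angular integral equals 2 / max(r1, r2).
import numpy as np
from scipy.integrate import quad

def angular(r1, r2):
    """Integral over theta of sin(theta)/sqrt(r1^2 + r2^2 - 2*r1*r2*cos(theta))."""
    f = lambda th: np.sin(th)/np.sqrt(r1**2 + r2**2 - 2*r1*r2*np.cos(th))
    return quad(f, 0.0, np.pi)[0]

for r1, r2 in [(0.5, 2.0), (2.0, 0.5), (1.3, 1.3)]:
    print(angular(r1, r2), 2.0/max(r1, r2))   # each pair agrees
```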
Boy, do you get good at doing these. In fact, you get so good you just know them by heart if you start doing this kind of thing often, because you don't want to waste time flipping through pages. It's like knowing somebody's phone number: if you call them up a lot, you just know it. And you will know this stuff if you work through these things, and you will suddenly seem very smart to people who don't do these kinds of calculations; or very nerdy, perhaps. Anyway, you can do these integrals by parts, and here's what we get. The integral of r2 squared e to the minus 2 r2 is equal to minus e to the minus 2 r2 over 4, times the polynomial in r2: 1 plus 2 r2 plus 2 r2 squared. And the other one has one less term, and is in the middle of slide 445. Again, if you worry about whether these are correct, or you just want to reassure yourself, take the derivative. Whenever you're by yourself and you don't have software and you're proposing an antiderivative, remember that the antiderivative is always a bit harder; it's like dividing, in a way, you have to kind of see if it's going to work, whereas the derivative is more like multiplication, you can just do it. So you can always go backwards and check. Now let's put in the limits, and we can finish off the integral over r2. We put in 0 to r1 on the first integral, and we get the expression at the top of slide 446, and r1 to infinity on the second integral. Well, at infinity the exponential vanishes, so we just get 0, and then we have a term that I've shown. And if you tidy everything up, you get the following: this radial integral is equal to one fourth times [1 over r1, minus e to the minus 2 r1 times (1 over r1 plus 1)]. It's a little bit messy, but not too bad. The integral over r1 can also be done now by parts. So: the integral over r2, which was conditional, you've done that. The integrals over theta 1 and phi 1, I said, give 4 pi. Now you've got this function of r1 and you've got to integrate it by parts again. And that integral turns out to be just 5 over 128. That's the way these things sometimes work, and it usually means you've done it right if it works out like that. But what do we get? Well, we get a factor of 16 pi squared, because in effect we got two 4 pi's from the angular integrations, but the pi squared goes away because of the 1 over the square root of pi in each 1s wave function; I've got two of them, and they appear on both sides, so that's a 1 over pi squared. And so we get that the energy correction by first order perturbation theory, E1, is equal to 5 eighths, because 16 times 5 over 128 is 5 eighths. The first reaction when you do this calculation is to curse and say, 5 eighths of what? Well, it's 5 eighths of a hartree, because we were working in atomic units, and we know that that's the unit of energy in atomic units. So one way is to close your eyes and just say, look, I didn't do anything wrong, I set up these units, I know this is an energy, it's got to be in hartree. If that doesn't reassure you, you can go back and put in all the constants now that you've done everything, let them ride along, very messy stuff, and you can see that it is 5 eighths of a hartree. All the constants just ride along. Now, the two electrons, each one interacting with the nucleus, give minus half a hartree each, because a hartree is approximately twice the ionization energy of the hydrogen atom. So that's minus a half, minus a half, plus 5 eighths. And therefore the total energy of the hydride anion is the energy of the first electron, minus a half, plus the energy of the second electron, minus a half, plus 5 eighths: minus 3 eighths.
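Here is a numeric check of that 5 eighths, a sketch of mine that just stacks up the pieces derived above: the factor of 16 from the angular integrations and the conditional radial integral over r2.

```python
# Numeric check: the 1s-1s electron repulsion integral is 5/8 hartree.
import numpy as np
from scipy.integrate import quad

def inner(r1):
    # Conditional r2 integral: 1/r1 applies inside r1, 1/r2 outside.
    a = quad(lambda r2: r2**2*np.exp(-2*r2), 0.0, r1)[0]/r1
    b = quad(lambda r2: r2*np.exp(-2*r2), r1, np.inf)[0]
    return a + b

E1 = 16*quad(lambda r1: r1**2*np.exp(-2*r1)*inner(r1), 0.0, np.inf)[0]
print(E1, 5/8)   # both 0.625: the repulsion correction is 5/8 hartree
```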
Perfect, you might think. It's negative. So that means that H minus is stable compared to a proton, an electron, and another electron at rest at infinity. Well, unfortunately, that is a pretty low bar to have to meet, because that's not the question. The question really is: is hydride stable compared to a hydrogen atom, which we know is stable, plus an electron at infinity? And that's a real sour ending, because the energy of the hydride came out as minus 3 eighths of a hartree, and the energy of a hydrogen atom plus an electron is minus a half. Therefore what we're predicting is that if we have a hydrogen atom, and we have an electron, and we bring it up, the energy goes up. In other words, the atom just kicks the electron back out. It ionizes it back out, and the repulsion force wins. And that would mean that hydride wouldn't exist. And if hydride didn't exist, we wouldn't have a name for it. Well, maybe I shouldn't say that; we have names for plenty of things that don't exist except in our heads. But hydride is a real thing that we can see. It has a very big radius; the radius of the hydride ion is bigger than that of the fluoride anion. So it's very puffed up, but it does exist. Unfortunately, this could mean two things. It could mean that quantum mechanics is a crock and it doesn't work, and this is the proof. Or it could mean that perturbation theory to first order is not good enough to give us the correct answer. And in fact, in this case, it's the second thing. That means we've got to somehow work harder, to figure out a better wave function, or a better way to calculate the energy, that we know is more accurate than what we've done here. After all that work, then, we can't reproduce the simple fact that H minus exists, has a well-known radius, and has a positive ionization energy. Well, we should have anticipated that, in retrospect, and here's why. Our perturbation has exactly the same size as the two attractive parts. Recall that the idea behind perturbation theory was that you had a large problem that was simple, that you had solved, and then you were adding a small, complicated part that you couldn't solve exactly. But because it was small, you could expand it, you could close in on the answer as a power series, and you could decide when to stop. But look at this problem. Suppose we propose there's a parameter x that indicates the relative size of the various energies, where x is sort of the ratio of the repulsion energy to the attraction energy for any particular electron. Then if we're trying to get a solution as a series in x, and x is near 1, the powers of x don't get small at all. So just assuming that e to the x, for example, is 1 plus x is a very bad approximation when x is not small; if x is near 1, you're comparing e, which is about 2.7, with 2, and it's way off. And if x in the perturbation series happens to be bigger than unity, then what that means is that as you calculate successive corrections, they might get more and more violent. You might say, well, first the energy should go down like this, then it should go up like that, then down like this, and you might have a sum of terms that just doesn't add up to anything; it just diverges. And furthermore, it becomes very inaccurate: after you do all this ton of work, you still get garbage out at the end. So we shouldn't be so surprised that this is not so easy to do. And whenever you look at a problem like this, it's a good idea to try to estimate how big these energies are likely to be, and get an idea of whether perturbation theory may or may not work.
Here, in retrospect, we shouldn't have expected it to work well, but of course, if you go through the calculation, you do all those integrals, you do all that work, and it doesn't work, then you really scratch your head, and you remember that for a long time. How can we improve things? What's wrong? Well, using the hydrogen 1s orbitals is not a good idea, because the hydrogen 1s orbitals have their maximum probability of finding the electron in a shell at the Bohr radius, but we know experimentally that the hydride ion has a much bigger radius than the hydrogen atom. The electron-electron repulsion, among other things, puffs it up. So we would expect that it might be too tight a squeeze to just sit there with the hydrogen wave functions and calculate a correction. Now, we could go further in perturbation theory and calculate the correction to the hydrogen wave function, and what we would find is that the wave function would get bigger when we corrected it. But that would be a lot of integrals to do, not a few, a lot, and it would take a long, long time. Even if we had software helping us out along the way, we'd still have to keep track of things very accurately and add them all up. So I want to try a slightly different approach. We could use perturbation theory to correct the wave function, but why don't we correct the wave function with physical intuition instead? Why don't we introduce an artificial parameter into the orbitals that controls their radius? Before, we had the wave function as e to the minus r1, for example. Now let's put something else in there: let's put in e to the minus zeta r1. And now zeta is something we can control. It's a dial. We can dial it in and out, and we can compute the energy of the hydride anion as a function of this parameter zeta, and then we can find the optimum. It won't necessarily guarantee success, but you learn a lot by failing in this kind of endeavor. It's very important never to look at the answer before you've really tried like crazy, because if you do, you miss everything about it. It's like somebody giving you the answer to a crossword, or telling you in a Sudoku to put a 3 there: it completely ruins it, in a way, and you never really learn it. All right. Let's keep in mind that our evaluation of the energy, although we did it by perturbation theory there, is in fact exact for that wave function, because we had the hydrogen atom energy, that's exact, minus 1 half hartree for each electron, and we did the integral over 1 over r12 exactly. We didn't make any approximations there. So that's the exact energy for that wave function. In order to get a better estimate, we have to correct the wave function, and we're just going to correct the wave function now. Hopefully, if we just tweak the wave function slightly, without changing it in any essential way, the math won't get too difficult. We can still use all our other results, do our integrals and so forth, and then see what we get. So we'll recycle most of our work; I'll refer back to those integrals if you want to look back at how to do them. We just slightly adjust the most probable radius. And what we expect physically is that it should get more stable if we make it a little bit bigger. Now, if we make it too big, that'll be very bad, because if everything is very far apart, then the energy is zero; it's like the proton and two electrons at rest at infinity.
So as I said, rather than doing perturbation theory to the next order, which would be interesting but would take a long time, we're just going to guess a way to adjust the wave function, and then we're going to try this variational approach. We have an energy that we've calculated that depends on a parameter, and we know that if the energy goes lower, we have a closer approximation to the truth. So we minimize the energy with respect to the parameter. We introduce a parameter that I'm calling zeta, which looks like a Greek squiggle. A lot of people don't even know what the letter is, but now you're part of the club. There are two Greek letters that are squiggles: zeta and xi. Don't get them mixed up; zeta has fewer squiggles. So let's put in our new normalized guess. Here it is at the bottom of slide 454. The wave function, which is a function of r1 and r2: psi of r1, r2 is equal to zeta cubed divided by pi, times e to the minus zeta r1, times e to the minus zeta r2. And when zeta is 1, we get exactly the same thing that we had before. The question is, is zeta equal to 1 the best guess? Well, we can't do worse by introducing this. It may not improve, but we can't do worse, because we can always pick zeta equal to 1 and get what we got before. Now we have to calculate the expectation value of the energy with this wave function. That means we have to do the two hydrogen-like terms, each electron interacting with the proton, and then we have to do the darn repulsion integral again. Here we have our three Hamiltonians: h1, h2, in atomic units, and h12, I'll call it, which is the interaction term that mucks it up and prevents the energy from just being the sum. And we have three integrals to do, but thank goodness, two of them are identical, just with electron 1 and electron 2 swapped. There's no point doing that one twice. And here's what we get. The energy is equal to the double integral, again, I've used the shorthand here of integral d vector r1, d vector r2, of psi star, h, psi. In this case the wave function's real, so it does not matter whether we write the complex conjugate or not. And that breaks into three terms: the integral with h1 sandwiched in between, the integral with h2 sandwiched in between, whatever they are, they're the same, no difference between 1 and 2, and then the repulsive integral with h12, which is just 1 over r12 sandwiched in between, which we saw how to do. Therefore it's not going to be too bad, because the only real difference is this thing zeta in the exponent, and that doesn't change in any fundamental way whether we can do the integral or not. It's not like we suddenly switched to r squared or r cubed in the exponent, or something else that might make the integral quite difficult to do. The first two terms are the same, and the angular parts integrate to 4 pi in each case, because there's no angular dependence in the wave function. Therefore, all we have to do for either of the first two terms is the radial part. And I've written it out in full here for the variable r1 at the bottom of slide 457. We have r1 squared, that comes in from the volume element, we've got zeta cubed over pi, then the first wave function, then the Hamiltonian, then the second wave function, and the 4 pi comes out from the angular variables. So let's then take a closer look at how to do this. Well, we've got the kinetic energy part.
Now you might say, well, this is so similar to the 1s orbital of hydrogen, why don't you just kind of divine where to put zeta into the answer? And the answer is, I don't trust myself to be able to get that right. So I'm going to go back and put in what the kinetic energy is. I've written it in atomic units here; it's minus 1 half del squared. And I know how to write that out in spherical polar coordinates. And the wave function has no dependence on phi or theta. And therefore, the part that I end up with is the second derivative with respect to r1, minus 1 over r1 times the first derivative with respect to r1. And I have to keep in mind that I have the factor of minus 1 half, which I've included there. Therefore, here's what we've got. Now this is pretty messy, because I'm integrating with respect to r1. There's the wave function. And then there's this thing that's taking a lot of derivatives with respect to r1. And then there's the potential energy, which is minus 1 over r1, that I've put in there. So we've got all those three things in there, and they're operating on the wave function. So I've got to go step by step. I've got to take the derivatives, write down everything, put them there, and then I'm going to have to integrate by parts. Anything that doesn't depend on r1 can be pulled out in front. So I've pulled out the two exponentials that depend on r2. And now I've got this mess to do with r1. But the derivatives are pretty easy. All they do is bring down a zeta each time and change the sign depending on how many derivatives I took. So I end up with this little thing here to integrate: r1 squared, e to the minus zeta r1, times minus zeta squared over 2 plus 1 over r1 times zeta minus 1, and then there's another e to the minus zeta r1. And the integrals are standard, by which I mean you just look them up: r to the n times e to the minus r. And if you go ahead and do the integral and work it out, you get a result that looks pretty nice. It's 1 over 8 zeta, minus 1 over 4 zeta squared. And that's then our result. If you integrate over r2 and include the leading constants out front and do the whole thing, you get the following result for the single electron energy: the expectation value of just the Hamiltonian h1 is zeta over 2 times zeta minus 2. And when we put zeta equals 1, we get minus a half. And so I wouldn't have been able to figure out that it had this functional form without a very quiet room, without actually doing these integrals. And that's why I don't just put in something like, well, it's minus zeta over 2, because that's not correct. That also gives minus a half when zeta is 1, but it might not be right, and in fact, it isn't. And there are good reasons why it has to have both a linear and a quadratic term. The electron repulsion integral is the same sort of exercise. And I'm going to let you do that, because that one we did in great detail with the sine theta and the conditional 2 over r1. And if you go through and do it with a big piece of paper and a quiet room, you will get 5 zeta divided by 8 as the answer for the electron repulsion. And that'll be quite a lot of paper to hand in when you do it as one of our problems. Now, whenever you get something, you should check whether it makes sense. Sometimes things don't seem to make sense, like the electron going through both slits.
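That one-electron result is easy to check by machine. Here's a short sympy sketch (my own verification; the lecture does this by hand) applying h1 = minus one-half del squared minus 1 over r1 to the scaled orbital and integrating:

```python
import sympy as sp

r, zeta = sp.symbols('r zeta', positive=True)
phi = sp.exp(-zeta*r)                        # scaled 1s-type orbital, unnormalized

# Laplacian of a spherically symmetric function: f'' + (2/r) f'
lap = sp.diff(phi, r, 2) + (2/r)*sp.diff(phi, r)
h1_phi = -sp.Rational(1, 2)*lap - phi/r      # (-1/2 del^2 - 1/r) acting on phi

# Normalization zeta**3/pi times the angular 4*pi gives 4*zeta**3 out front;
# the r2 integration contributes a factor of 1.
h1_expect = 4*zeta**3 * sp.integrate(r**2 * phi * h1_phi, (r, 0, sp.oo))
print(sp.factor(h1_expect))                  # prints zeta*(zeta - 2)/2
```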
But in that case, you keep doing the experiment over and over. In this case, you should expect the calculation to tell you that the thing got bigger. And when we look at this formula, 5 zeta over 8: if zeta is reduced, the orbital gets larger, and the repulsion is lower. And that makes sense. That's exactly what we expect to happen. The total expectation value of the energy as a function of zeta is the sum of these three terms. And finally, we get this formula. When everything comes out in the wash, we add everything up and we do the algebra very carefully and don't make any errors, we get zeta squared minus 11 zeta over 8. That is the zeta-dependent energy. And when zeta is equal to 1, we get our prior value of minus 3/8 of a hartree. Now, however, we have this energy as a function of zeta. We have the variational principle that says when the energy goes lower, you did better. And this kind of function, with a quadratic and a linear term with different signs, clearly has a minimum. And so we can optimize this by the variational principle and get a much better estimate of the energy of the hydride anion. Let's do this then as a practice problem. This is practice problem 24. Let's optimize the value of zeta to minimize the energy. Is the hydride ion predicted to be stable? Here's the answer. It's a simple problem in calculus. What do we do? If there's a minimum, the slope is zero there. And while saying the function has a minimum is kind of a vague statement in a way, saying that the derivative is equal to zero is something you can actually work with. It's an actual equation that gives you a solution. And so that's of course what you translate it to mean. And therefore we take the derivative of that function: we have zeta squared minus 11 eighths zeta, and if we take the derivative, we get 2 zeta minus 11 eighths. And if we set that equal to zero, then zeta should be 11 sixteenths. Now again, we say, does that make sense? And the answer is yes, because 16 sixteenths would be one. And what we did is we puffed it out. We let it go out quite a bit more. And that lowered the energy, because the repulsion term goes away faster than the attraction terms do. And now the question is, is the hydride ion stable? By which I mean, is the energy of the darn hydride ion lower than minus one-half hartree, which is the energy of the hydrogen atom plus an electron just hanging out in the wind? Well, let's put in the optimum value of zeta. The energy minimum as a function of zeta is 11 over 16 squared, minus 11 over 8 times 11 over 16. And that becomes minus 121 over 256. So there's no joy in Mudville here, because to be stable, we would have to get an energy lower than minus 128 over 256. We're at minus 121 over 256. That's much better than the minus 3/8, which is minus 96 over 256, that we had before. That was pretty bad. This is better for sure. And that shows that we're closing in on the correct way of looking at the problem. But unfortunately, it's not good enough. It looks like hydride then is a tough nut to crack. And indeed it is. We're going to crack it, but we're going to have to go into the kitchen, and we're going to have to get some tools out. And you know, some nuts are Brazil nuts and other nuts are peanuts. You can open peanuts with your fingers. This is definitely a macadamia or a Brazil nut, and it's going to be tough to crack. It's not us. We didn't do anything wrong. We've taken a perfectly sensible approach. It's just that this is a tricky system.
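The whole practice problem fits in a few lines of sympy; this is just a machine check of the arithmetic above, using only the lecture's numbers:

```python
import sympy as sp

zeta = sp.symbols('zeta', positive=True)
E = zeta**2 - sp.Rational(11, 8)*zeta            # E(zeta) for hydride, in hartree

zeta_opt = sp.solve(sp.diff(E, zeta), zeta)[0]   # 11/16
E_min = E.subs(zeta, zeta_opt)                   # -121/256

print(zeta_opt, E_min, float(E_min))             # -0.4727... > -0.5: not stable
```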
And it's interesting that it's so simple and yet so hard at the same time to get the right answer. But for some reassurance that it's not the quantum mechanics that's completely wrong or something crazy, let's try helium. Why would helium be better? Well, helium has a plus 2 charge on the nucleus. So that's going to attract the electrons much more strongly than the wimpy single charge of the proton. And that might mean that it's much easier to get it to be stable and actually get something reasonable. And it also teaches you why you love atomic units. So here I've written the Hamiltonian in atomic units for helium. The kinetic energy is exactly the same, minus a half del squared. And then the only thing that changes is we've got minus 2 over r1, minus 2 over r2, and then plus 1 over r12. So that repulsion integral is the same. That's done. And the only other change is that the attraction terms have a 2. And so that's easy to do, because we can scale. The kinetic and potential energies are both going to be doubled for each electron. And the repulsion is going to be increased as well, because the orbitals are going to get pulled in toward the nucleus. And what we get when we work it out is, instead of getting minus a half and minus a half, we get minus 2 and minus 2, minus 2 squared in total, for the zeroth-order energy. And instead of just getting 5/8, we get 5/8 times 2. So therefore the total energy of the helium atom, if you follow through the exact same math as what we did with the hydride anion, with a 2 instead of a 1 everywhere (which you can do very quickly if you just keep track of it), comes out to minus 11 fourths. That means that the atom is stable already by perturbation theory, compared to a helium ion and an electron at infinity at rest. And the experimental energy that's listed in the literature, in hartrees, is minus 2.9033. And recall I said if you quote things in hartrees, you never have to re-quote them, because the value doesn't depend on what the units are. It just depends on what the calculation is. It doesn't depend on the fundamental constants. And our calculated value, minus 11 fourths, is minus 2.75 hartrees. Here's the real helium, experimentally, down here. Here's the best we could do before, just by perturbation theory. So at least we got that it was stable. It wasn't unstable, but there's quite a bit of a difference from the correct answer. Five percent error in these kinds of things is way, way, way off, unfortunately. That's like you didn't even bomb the right city. You didn't even bomb the right country. You're on a different planet in terms of your target. And therefore we can't do that. But why don't we just do the same thing with helium as what we did with hydride? Since it's already better, at least it's predicted to be stable, let's puff the orbitals up a little bit with zeta, take the minimum, and follow through the same math again with the 2 instead of the 1. And we get a similar expression to what we had before, but now e of zeta is zeta squared minus 27 over 8 zeta. And by the same technique, obviously, you get 27 over 16 for the optimum value of zeta. And then if you put that into the minimum, what you get is exactly minus 27 over 16 squared, which is approximately minus 2.84766 hartree. That is looking pretty good, because the real one is minus 2.9. But the real question is, well, how good are our results in terms of something that a chemist might be interested in?
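Both atoms fit one formula if you keep the nuclear charge Z symbolic. The lecture doesn't write this general form out, but it follows directly from the pieces quoted: kinetic zeta squared over 2 and attraction minus Z zeta per electron, plus the repulsion 5 zeta over 8. A minimal sympy sketch under that assumption:

```python
import sympy as sp

zeta, Z = sp.symbols('zeta Z', positive=True)
E = 2*(zeta**2/2 - Z*zeta) + sp.Rational(5, 8)*zeta   # two electrons + repulsion

zeta_opt = sp.solve(sp.diff(E, zeta), zeta)[0]        # Z - 5/16: the puffed radius
E_min = sp.simplify(E.subs(zeta, zeta_opt))           # equals -(Z - 5/16)**2

print(zeta_opt.subs(Z, 1), E_min.subs(Z, 1))  # hydride: 11/16, -121/256
print(zeta_opt.subs(Z, 2), E_min.subs(Z, 2))  # helium: 27/16, -729/256 ~ -2.8477
```

For Z = 2 the minimum, minus 729/256, is the minus 2.84766 hartree quoted above against the experimental minus 2.9033.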
Could we, with this kind of approach, calculate something accurately enough that somebody would really invest money in some scheme based on that calculation? And the answer is no. The energy of the helium ion is minus 2 hartree, because it's just like a hydrogen atom with a different charge. So that is known. And our ionization energy then, if we're at minus 2.84766, the energy to boot one electron off, is 0.84766. The true value is 0.9037 to boot an electron off helium. When we convert the difference from hartrees to kilojoules per mole, which are more familiar units to a chemist, we find that we're in error by about 147 kilojoules per mole. That is similar to the strength of many chemical bonds. It would be a weak bond, but that's a lot of energy as far as a chemist is concerned. That's not a tiny error. That's a huge error. And so we're nowhere close, even doing this variational calculation, inserting this parameter zeta, doing all this work, doing all these integrals, taking the derivative, setting it to 0, optimizing it, putting it in, holding your breath, calculating it. We're still nowhere close to what we would call chemical accuracy. I think you can see why this kind of field in atomic physics and quantum chemistry gets called computational chemistry very quickly. Because you may have to introduce functions that aren't so very easy to integrate, but happen to be very close to the true wave function. And maybe part of our trouble is that we're introducing exponentials because we know how to integrate them. We have a closed form for the antiderivative. But maybe, once we start getting more than one electron in there, they aren't so close to the correct result. Next time, then, what we're going to look at is whether we can do better with a little bit more physical insight. We said, well, it puffed out, but could we do better than that? Could we somehow change the wave function in such a way that it's not so bad to integrate, but that it's much, much better in terms of results? And that will be what I call hydride try number 3. We're either going to strike out or we're going to at least get a single, and hopefully we can figure out that hydride is in fact stable. So we'll pick it up there next time.
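To make the size of that error concrete, here's the one-line conversion, using only numbers already quoted in these lectures:

```python
HARTREE_TO_KJ_PER_MOL = 2625.5      # conversion factor quoted earlier

ie_calculated = 2.84766 - 2.0       # our variational He energy minus the He+ energy
ie_true = 0.9037                    # quoted experimental ionization energy, hartree

error = (ie_true - ie_calculated) * HARTREE_TO_KJ_PER_MOL
print(round(error))                 # ~147 kJ/mol, on the order of a weak bond
```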
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:53 Where We Left Off 0:02:06 Figuring r12 0:03:55 Law of Cosines 0:07:58 First-Order Correction 0:09:11 The Theta Integral 0:17:42 What's the Energy? 0:19:56 Stability of Hydride 0:24:37 Improving our Estimate 0:30:19 Evaluating the Energy 0:37:09 The Repulsion Term 0:43:59 The Helium Atom
10.5446/18895 (DOI)
Welcome back to Chemistry 131A. Today what we're going to do is pick up where we left off last time and expand our vision. We're going to talk about approximation methods, including the variational principle. We're going to introduce atomic units, because when we do calculations in quantum chemistry, carrying around big and small numbers turns into a big problem. And we're going to prepare the ground to study two-electron systems. We can't hope to get the exact solution for these two-electron systems, but we can get as close a solution as we want to, and that's usually good enough for chemical accuracy. We saw from the last lecture, then, that two electrons in the same spatial orbital would have to have their spins paired. In other words, the overall wave function has to be antisymmetric, so that if the spatial part is symmetric, the spin part has to be the antisymmetric singlet state: the 50-50 combination, up-down minus down-up, was the state that came out. Now, when we look at two-electron systems, hydride might be the simplest one. We'll start with that, and we'll see that although conceptually it's very simple, computationally it's extremely difficult. And helium is very similar to hydride except there's no negative charge; there are two electrons, but now instead of a plus one charge on the nucleus, there's a plus two charge, and that will make all the difference in terms of how easy it is to calculate the properties of helium. We can just assume anyway that the electrons in these systems are in the singlet state, and we're going to forget about that for the time being and just focus on the spatial part. We want to figure out: what's the energy of this atom? Can we figure out its ionization energy? Can we figure out its properties? And if so, how accurately can we get our results to compare with experiment? So the idea behind what I'm going to introduce now, which is called the variational principle, harks back to our observation earlier on that eigenfunctions for different eigenvalues of a linear Hermitian operator are orthogonal. We can think of them like vectors in a space. They are all at right angles to each other, and that means that we can take any function and decompose it into eigenfunctions of some linear Hermitian operator, and what we will be trying to do is to decompose it into eigenfunctions of an energy operator. So I've written here what orthogonality means for two functions phi n and phi m. It means if we take the integral, which is the continuous analog of the dot product, if we integrate every point in space up against each other, we get zero, and that literally means these two functions have nothing to do with each other. They're in different directions. And of course, we can always normalize any function. We did that early on. We just take the raw function, square it, integrate it and see what we get, and then we divide by the square root of whatever we get to make sure that it comes out to be one. That means that when we do that, what we're ensuring is that we have these vectors in different directions, but we wouldn't want to be measuring the x direction in meters and the y direction in miles. We want to make sure that the units of our measurement, the amounts of each function that we're going to mix into this recipe to get our final function, are in the same units, and normalizing them makes sure that that happens. And then there's the orthogonality: just as, on a two-dimensional surface, we can specify any coordinate on the surface as an ordered pair x comma y.
x tells us how far to go on the x-axis, and y on the y-axis. In the same way, we can specify any function, any unknown function, as a linear combination of these basis functions or these eigenfunctions. And in fact, here I've drawn on slide 417 a picture, which is kind of the analog of a coordinate picture for points, but here I have a function f. I visualize f as just an arrow pointing somewhere. Nothing more than that. And then I can visualize my basis functions as arrows pointing along the coordinate axes in this funny function space. And in this one, there's a red part, phi 1, in f, and there's a blue part, phi 2. And in general, there could be an infinite number of parts, but usually we won't have to go that far to get a decent approximation to the function that we're trying to get. How can we calculate the amounts? Well, we calculate the amounts just by taking the integral. By the same token, the amount of phi n in phi n is 1; that's 100 percent. And the amount of phi n in phi m is 0 percent. If we have a function f here that I've written, and we integrate it with each basis function in turn, phi 1, phi 2, so forth and so on, we get a series of numbers. The numbers are the results of the integrals. And the number that we get by doing that integral is the amount of that basis function that is present in the unknown function f. So all we have to be able to do, to figure out how much of our basis function is in an unknown function, is an integral. And while integrals can be intimidating to beginners, integrals are considered to be easy to do one way or another, numerically or analytically. And so it's very good that we have a closed solution to calculate these coefficients. We don't have to try to guess them or something crazy and then see how close we get to f. We have a way to systematically chip away and get our unknown function as accurately as we want. And sometimes we have to do the integrals numerically, and in that case we use a very powerful computer and we set small step sizes and we do the integrals. The idea, then, that some unknown function that we're trying to find can be represented as a linear combination of eigenfunctions leads us on to this very powerful and general method to find approximate solutions to difficult problems. And that method is called the variational principle. The variational principle is just going to be an inequality, but an inequality is very important when you're trying to figure out which way to go. Even as a kid, when you're playing a game where you're blindfolded, people say you're getting warmer, you're getting colder. If they never said anything about where you were going, you could never find whatever it is that you were looking for. And the computer is not too smart, and we aren't too smart either, and we need a criterion. We need some way to figure out if this new thing is better or worse than the last one, and if so, how much better it is. And this variational principle is going to give us the machinery to do that. We can suppose always, even for an unknown problem that we haven't solved, that there does exist a set of eigenstates, energy eigenstates in particular, for the problem we are trying to solve. We haven't yet solved it, but we suppose that if we could solve it, these states would exist. And we know that these states are orthogonal, because we proved that generally for a linear Hermitian operator, which the Hamiltonian, the energy operator, is. It's a linear Hermitian operator.
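Computing amounts by integrals is easy to see numerically. Here's a hypothetical example, not from the lecture: expanding f(x) = x(1 - x) in the familiar orthonormal particle-in-a-box basis on [0, 1] and reading off each coefficient as an overlap integral:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
f = x * (1 - x)                                # some "unknown" function on [0, 1]

for n in range(1, 6):
    phi_n = np.sqrt(2.0) * np.sin(n*np.pi*x)   # orthonormal box eigenfunctions
    c_n = np.trapz(phi_n * f, x)               # amount of phi_n in f
    print(n, round(c_n, 6))   # odd-n amounts fall off as 1/n**3; even-n vanish
```

Summing c_n times phi_n back up reproduces f to whatever accuracy you like, which is the sense in which the basis spans the function.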
Then, based on our orthogonal set of states, we can write some wave function psi as a linear combination of these unknown states. Now this might seem to be the vaguest equation ever proposed. I have an unknown thing here, psi of r, and then I've got some unknown coefficients, and then I've got some unknown solutions to a problem that I'm trying to solve. But nevertheless, this is an exact relationship, and the method to calculate the coefficients is to do the integral of these functions. We don't know the coefficients Cn, but we can assume that they exist and that they are complex numbers, this being quantum mechanics. And they can be calculated by doing integrals. So we're trying to find, let's say, the ground state wave function, the lowest energy, the most stable state of an atom or some other system. And we don't know what it is, but we make a guess. We make a guess based on a similar system or by analogy, or we just find a function that looks like it might be a good wave function, and we take that as a guess. And a very good guess to make would be a function that is easy to integrate, because when we want to actually figure out what's going on in a real calculation, we may have to do integrals. So picking a function that's very, very hard to integrate, that takes a long time or is tricky, is not usually a good choice. The condition, then, that our unknown guess wave function be normalized, I've written here on the bottom of slide 420 in this rather long equation. Basically, we take the integral of psi star psi, and then we expand each one. For psi star we put Cn star, phi n star, because they could both be complex numbers. And then for the other psi, not starred, we put the sum over m of Cm phi m. The important thing to do here is that whenever you introduce a second sum and you're breaking up an unknown vector into its components, always use a different letter. Don't use n and then use n again. If you do that, and that happens when you're first starting out, you run into a terrible mess, because then you're inadvertently coupling together coefficients that have nothing to do with each other. So we just want to keep them all separate. So now, let's say we've got 16 functions: C1 through C16 in one sum, and then another C1 through C16 in the other. Almost all the cross terms go away, though, because the eigenfunctions are orthogonal. So whenever the two functions aren't the same, even though we don't know what they are, the term vanishes. So we don't have anything except the terms where n is equal to m. And we end up, then, in the second line, with the sum over n of Cn squared, phi star n, phi n. And because the phi's are normalized (we always assume that the eigenstates are normalized; if not, we normalize them), that just ends up saying that the sum of the squares of the coefficients is equal to 1. So the coefficients themselves, when you take each one and square its length and add them all up, add up to 1. That's the condition. We don't know the explicit form of these energy eigenfunctions phi n. And we don't know the energies either. But we can order them. We're trying to get the lowest energy. Let's call the lowest energy E0. And let's suppose that there are other energies E1, E2, and so forth, that are higher. We don't know where they are. We don't know what they are. But that's unimportant for this discussion. We guess a trial wave function psi. We mentally decompose it into a linear combination of these eigenfunctions, with certain coefficients that are complex numbers.
And then, using this, we can get a very, very useful inequality to do with the energy, which will help us to systematically improve our initial guess. Here's what we do. We follow through the same procedure that we used to show that the sum of the squared coefficients has to be 1. But now we put the energy operator in the integral. So we're calculating the expectation value of the energy in the state psi: psi star, h, psi. We integrate that. Well, we don't know how to do that integral. We know the Hamiltonian for our unknown system, that's for sure; otherwise we would be nowhere. We have to know what forces and energies are at play. Knowing those doesn't give us the answer, unfortunately. And then we take our unknown function. And again, we know that we can always decompose it into this linear combination, just like any point on the x, y plane has to have an x coordinate and a y coordinate, and any point in 3D space has to have an x, y and z coordinate. So we can decompose it. And I've written that out here. Again, one sum is over n, then h, then the other sum is over m. And now we can use the fact that we know these functions phi are energy eigenstates, to put h on each of the ones marked phi m, and we get Em. And then we can use the fact that all the functions are orthogonal to realize that, when we multiply out the two sums, only the terms where n is equal to m are going to come through. And in that case, we have the square of the coefficient, which is a real positive number. And then we have the wave function phi n, En, phi n, which integrates to give the sum over n of En times Cn squared. En is always greater than or equal to E naught, because we ordered the energies. And so if there's only one term, E naught, then the sum is equal to E naught. But if there's any other amount, if there's a 50-50 mixture of E naught and E1, then that's higher than 100 percent of E naught. And because these numbers Cn squared are real positive numbers, there can't be any cancellation. And that means that the sum of En Cn squared, which is the expectation value of the energy for our guess, is always bigger than or equal to the sum over n of E naught times Cn squared: instead of putting in En, I put in E naught for each term. And then I can pull the E naught out of the sum, because it doesn't depend on the index n. And before, I showed you that the sum of Cn squared is 1. So that means that the expectation value of the energy for our guess always has to be greater than or equal to the ground state energy E naught. We guess something, we calculate the energy, and then we guess again, or we adjust something, or we minimize something by calculus, because we can find the minimum of many things by finding where the derivative is zero. And we go downhill, and we know automatically that if our energy lowers, we're improving our estimate of the ground state of whatever quantum system it is that we're looking at. That's the variational principle. And it's really one of the fundamental and most powerful tools that we have to find approximate solutions for these complex systems, because we can minimize things in many ways. We can use a computer. We can use calculus, and so on. There are many tricks. Minimizing functions and so forth is a well-studied area. So if we can take this problem in quantum mechanics, of figuring out this wave function, and cast it into a minimization problem that mathematicians have studied for ages, then we've made a lot of progress. So this was a very, very important result.
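The inequality is easy to see in a toy numerical experiment. The energies and coefficients below are made up for illustration; the point is only that any normalized mixture averages to something at or above the lowest level:

```python
import numpy as np

E_levels = np.array([-0.5, -0.125, -0.055, 0.2])   # ordered, E_0 first (made up)
rng = np.random.default_rng(1)

for _ in range(3):
    c = rng.normal(size=4) + 1j*rng.normal(size=4)
    c /= np.linalg.norm(c)                          # enforce sum |c_n|^2 = 1
    E_expect = float(np.sum(np.abs(c)**2 * E_levels))
    print(E_expect, E_expect >= E_levels[0])        # always True
```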
Now, it may seem like this is a bit empty, because we broke our function up into these eigenfunctions, but we can't actually write down the eigenfunctions. If we could write down the eigenfunctions, we wouldn't be doing any of this; we'd just write down the ground state eigenfunction, and we would know by quantum mechanics everything that it's possible to know about the ground state of the atom. And so the problem would already be solved. But that's not the main thing, because we don't need to know what they are. All we need to do is calculate the energy, and we don't need to know them to calculate the energy, because we know the energy operator. It's got some derivatives in it, and it's got 1 over r and some charges and other things in it. And we can calculate, for the function we guess, exactly what the energy is, by just doing integrals. We don't need to actually try to break it apart into these unknown functions. That's just a mental exercise to show that when the energy gets lower, the guess gets better. We calculate the energy. We tweak our wave function. If the energy goes lower, it's better automatically, and that means that we're closer to the correct solution. Obviously, we have to make a very good guess, and whatever we add, whatever we're twisting or adding in, some things we add may not be very important. So we work like crazy and do all these integrals, and the energy goes down a little bit. That's disappointing. But if we find the right thing to add to make the energy really drop down very sharply, down very close to the minimum, then we're pretty sure that we've got a very good description of what's going on. And when we see what it took to make the system better, we start to get some physical insight into what's going on, or how we should think of where the electrons are and how they may be interacting with each other. At some point, we're just going to hit a lower limit, or we're going to get tired of doing so many integrals, and we might give up, or we might have to put in a lot more computational effort, much more than we want to bother with, to get a more accurate result. It all depends on the problem, and whether you're trying to set a benchmark for accuracy or really deeply look into the theory, or whether you're trying to figure out whether the cis or trans isomer of something might be more stable, or whether this conformer may be more stable than another one. It just all depends on the problem. But whatever it is, once the energy matches the ground state energy, which we can usually measure experimentally, then we know that whatever wave function we have is a pretty good approximation to the ground state wave function, psi naught, and then we can look at the wave function, and it'll tell us the expectation value of many other things that don't have anything to do with the energy, but have to do with other things we might be interested in measuring. And then, of course, we can always compare with experiment. Ionization energies of atoms and such systems can be measured quite well and have been measured, and so we can compare the ionization energy of helium, for example, the energy to make the helium plus-one ion and kick an electron off, with what we calculate the ionization energy should be. We can calculate that by calculating the energy of the helium atom and then the energy of the helium ion, which is like a hydrogen atom, because now there's one electron, so we can just use a formula for that.
And then an electron at infinity at rest, which has zero energy because it has no potential and no kinetic energy. Let's try some examples. But first, before we undertake any of these calculations, we're going to need atomic units. If we carry around MKS or SI units in these calculations, with all the constants and so forth, it'll get extremely tedious to keep track of everything. So let's have a look, then, at how we can minimize this effort. To streamline the equations, atomic physicists adopted units of measure such that all the constants in front, like e squared and m sub e and h bar and all those things that appear in all these equations, do not involve large or small numbers. If you take h bar, which is 10 to the minus 34, and you cube it on a digital computer, it underflows. It makes a number so small that in most languages you just get zero. And then later, if you divide by it, you divide by zero and you have a big problem. And therefore we don't want to have large or small numbers, like the electron charge raised to the fourth power. These can quickly get out of hand when you're adding and subtracting and multiplying and dividing small and large numbers. So for accuracy as well, it's much better to keep everything near one. That way you have lots of headroom, lots of floor room, on bigger and smaller numbers, and your calculation usually stays accurate. And these units are called Hartree atomic units. They're named after the British physicist and applied mathematician Douglas Hartree, who was instrumental in proposing many of these methods, including ways to calculate the electronic structure of atoms. Here's what we do, then, on slide 426. We agree to measure mass, charge, energy and so forth in units such that the mass of the electron is one unit, the charge of the electron is one unit, h bar is one unit, and 1 over 4 pi epsilon naught, which comes in all the time whenever you have charges interacting, is one unit. And if we do that, then all the units disappear and we've got much simpler equations. Of course, although they've disappeared, the units are still there, riding along, and we have to put them back in at the end if we want to go back to MKS units. The thing is, sometimes we just don't want to go back. We'd rather express everything in these units, in hartrees. The unit of energy is called the hartree: E sub H, I've written here, is the mass of the electron times the charge of the electron to the fourth power, divided by the quantity 4 pi epsilon naught h bar, squared. That's the hartree. That's one unit of energy. As I said, once we're done, we can convert to more conventional units. In chemistry or atomic physics, we might want to convert to electron volts. And one hartree is 27.211385 electron volts, or about twice the ionization energy of a hydrogen atom. That's equivalent to 219,470 wave numbers if we're doing spectroscopy. So it's quite a big unit of energy. Or, if we're talking to an organic chemist and they're talking in terms of kilojoules per mole, which is common in thermochemistry and bond stability, then one hartree is 2,625.5 kilojoules per mole. So again, it's a very big unit. The strongest chemical bond, I believe, is carbon monoxide, and that's about a thousand kilojoules per mole. So one hartree is much stronger than a typical chemical bond, in terms of an energy unit.
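These conversions come up often enough that it's worth keeping them in one place. A minimal sketch using exactly the factors quoted above (look up NIST for more digits):

```python
# Conversion factors as quoted in the lecture.
HARTREE_IN_EV    = 27.211385
HARTREE_IN_CM1   = 219470.0        # wave numbers
HARTREE_IN_KJMOL = 2625.5

def hartree_to(value, unit):
    factors = {'eV': HARTREE_IN_EV, 'cm-1': HARTREE_IN_CM1,
               'kJ/mol': HARTREE_IN_KJMOL}
    return value * factors[unit]

print(hartree_to(0.5, 'eV'))       # hydrogen ionization energy, ~13.6 eV
```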
There's another, more subtle reason to use atomic units, and it's one that you probably don't ever think of unless you're doing very, very, very accurate calculations and you're trying to compare your calculation with benchmarks that people have set in the past for various numbers. The problem that you run into is this: suppose you're making a calculation out to 8 or 9 digits of accuracy, and you're introducing small things and arguing about small energy terms, whether this term in the Hamiltonian is important or not, and exactly how well we can calculate these things. So you're pushing the frontier. The problem is, if you quote your energy in electron volts or some other form, then the value depends on what the values of the fundamental constants were when you were writing your paper, because you insert them: h bar, the speed of light, the charge of the electron. Now, normally we don't question those, because we look them up in the NIST database or something like that and take the most accurate values. But all those constants are subject to change. In other words, they're slightly variable. Why? Because somebody comes along with a more clever experiment to actually get the accuracy better than you could before. And that would be very bad: if a constant changed and then I went back to a paper from 1965, in addition to looking at the energy, I'd have to also figure out, well, what was the value of h bar back in 1965, because they hadn't determined it quite as accurately as today, and so forth. But if I work in hartrees, I don't have to do the calculation over, because we aren't using any of those. They're all one. So whatever they are, you quote the value in hartrees; only today, when you want to convert it, do you use the best possible values of all the constants. And that makes comparison with previous calculations much, much easier to do. That's a hidden benefit, then, of using these atomic units. So let's go on. Let's try a practice problem here, on slide 429. Let's do practice problem 23. Let's write down the Hamiltonian for the hydride anion and for the helium atom in conventional units and also in atomic units. This will be our lead-in, then, to actually using these Hamiltonians to do some calculations to figure out the wave functions for these atoms. So here's the answer. We have what? The kinetic energy of two electrons. Their potential energy of attraction to the nucleus; they're both attracted to the positive nucleus. And then we have the electron-electron repulsion term. And remember, we've always factored out the center of mass, and we pretend that the nucleus is fixed in space, so these coordinates are just for the electrons themselves. For hydride in conventional units, we have minus h bar squared over 2 m sub e, del 1 squared, minus h bar squared over 2 m sub e (the same mass of the electron), del 2 squared. Why the 1 and the 2? Well, each electron has some coordinates, x1, y1, z1 and x2, y2, z2, and the wave function will depend on these coordinates. The first one means: look, when you see x1, y1, or z1, take the second derivative, and that's going to figure out what the kinetic energy of electron 1 is doing. When you get to number 2, look at only x2, y2, and z2; treat the others as constant. That's going to isolate the kinetic energy of the second electron. So it looks a bit funny to have this subscript 1 and 2 on the del, but it's perfectly natural. And then we have the attraction, minus e squared over 4 pi epsilon naught r1. r1 is the distance of electron 1 to the nucleus.
Same thing for r2. And then we have plus, because these repel each other, e squared over 4 pi epsilon naught r12. r12 is the distance between the two electrons. And we'll get to that in a minute. In atomic units, it's much cleaner, because getting rid of the m sub e and the h bar and so forth, now my Hamiltonian is this: minus one half del 1 squared, minus one half del 2 squared, minus 1 over r1, minus 1 over r2, plus 1 over r12. That is a nice equation that is easy to deal with, both by pen and paper and by a digital computer. For helium in conventional units, we have exactly the same thing as hydride, except what? Well, the attraction to the nucleus: the nucleus has charge 2, so we have 2e times e. So we have minus 2 e squared over 4 pi epsilon naught r1, and the same thing for r2. And in atomic units, the Hamiltonian is simply, again, minus 1 half del 1 squared, minus 1 half del 2 squared, minus 2 over r1, minus 2 over r2, plus, again for the two electrons, 1 over r12. So it's just much, much cleaner and easier. Now, as I said, you have to be clear about the notation. And here what I've written is our coordinate system, on the bottom of slide 431. We think of these things as vectors: r1 to one electron, r2 to another, and then r12 is the vector from the tip of r2 to the tip of r1. It's a vector with a direction, the vector between the two electrons. And with respect to the figure, there's little r1 without boldface. If I use boldface, I'm talking about a vector, a quantity with magnitude and direction. And if I'm using just a regular italic typeface, I'm talking about the length of the vector, or just the distance between the nucleus and the electron in question. So r1 is the length of the vector bold r1, r2 is the length of the vector bold r2, and r12 is the length of the vector bold r12, which is just the vector r1 minus the vector r2. I know it has to be that, because I know when I add vectors, I put them head to tail. And when I take r2 and put the vector I called r12 head to tail with it, I get r1. And that means that r12 plus r2 should be equal to r1. And that means that r12 should be r1 minus r2, so that when I add r2 to it, I get r1. And that's how you go about it. Don't try to memorize which way round things go; you'll always get lost. Just say: I'm adding vectors, I put them head to tail. What's the vector I start with? What did I add to it? What's the vector I end up with? Set up an equation and just solve it. Now, even in classical mechanics, I believe it was Poincaré who showed that the n-body problem, where n is bigger than 2, can't be solved in closed form. It can't be written down as a simple formula, because the motion shows chaotic behavior. It can do all kinds of things, and to think that you can write a function for that is very naive. It's just a little bit beyond the power of a simple function to encapsulate the behavior. It's too complicated. And therefore, any kind of simple way of solving this problem in quantum mechanics is completely out of the question. We're going to have to guess a very good solution, and then we're going to have to work very hard to get a better solution. And the harder we work, the better it gets. But what's new? That's how life often works. So we're going to have to approximate the solution. Let's have a look. This I'm calling hydride anion, try number one. I'm not quite sure at this point how many tries we're going to have, but it might be quite a few before we actually get a hydride anion that we actually like.
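For reference, the two atomic-unit Hamiltonians just described, written out compactly:

```latex
\hat{H}_{\mathrm{H^-}} = -\tfrac{1}{2}\nabla_1^2 - \tfrac{1}{2}\nabla_2^2
  - \frac{1}{r_1} - \frac{1}{r_2} + \frac{1}{r_{12}}, \qquad
\hat{H}_{\mathrm{He}} = -\tfrac{1}{2}\nabla_1^2 - \tfrac{1}{2}\nabla_2^2
  - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}}
```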
The first try. Here's what we're going to do. We're going to take time-independent perturbation theory, which I'll review here in a second because that was a while ago, and we're going to use it to try to figure out what the ground state energy of the hydride anion is. We're going to use it to compute the correction to our naive guess for hydride. So recall from lecture eight that in perturbation theory we had a total Hamiltonian, which we broke apart, hopefully, into a big part and a small part. But usually what it really is, is a known part, where we already know the energies, and an unknown part. And we hope that big and small applies, but it doesn't always apply, and so we have to be a bit careful. Then we put the Hamiltonian on the wave function and get an energy times the wave function, and we expand everything out. The Hamiltonian is H naught plus lambda times the perturbation, where lambda is a parameter: when lambda is zero, it's the solved problem; when lambda is one, it's the problem we want to solve, where we've turned on the perturbation full force. And then we expand the wave function in terms of lambda, and we expand the energy in terms of lambda: E naught plus lambda E1 plus lambda squared E2, et cetera. And then we write everything out and we say, look, if this is going to be true, it's certainly true when lambda is equal to zero, because we've solved that problem and we know the ground state energy of the known system, let's say the particle in the box or the hydrogen atom, E0. We do know that. That's not open to question. If it's going to hold for all possible values of lambda, then what has to happen is that the various powers of lambda have to match. And that's an organizing principle, then, to get a set of equations that we can solve. And as I mentioned in lecture eight, we just set equal the same powers of lambda on both sides. We have to do some algebra, write out all these terms in lambda, and then we have to collect them together. What's the zeroth power? What's the first power? What's the second power? And so on. And the first two, which is all we're going to have to use, thank goodness, for this: the zeroth power was just H0 psi naught is equal to E0 psi naught. That's the solved problem. So good, that came back. If there's no perturbation at all, then that's what we get. And for the first power, we got this more complex equation: H0 on psi 1, the correction to the wave function, plus H1, the perturbation, on psi naught, is equal to E0 on psi 1 plus E1 on psi naught. And then we solved that, knowing the first equation, and we got a very important equation for the correction to the energy, E1. And that was an integral: psi naught star, H1 the perturbation, psi naught. The wave function we know, because we don't do perturbation theory unless we've got a solution for some related problem. If we don't, we can't profitably use it. So we can assume that we have some functional form for psi naught. And we certainly know what H1 is. We know what the perturbation is for our two-electron system. It'll be the electron-electron repulsion. And so calculating the energy is down to doing an integral. Unfortunately, the integral is going to be a lot harder than it seems, especially since we have two electrons. And that is because we're going to have to do a six-dimensional integral.
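In symbols, the setup just reviewed is:

```latex
\hat{H} = \hat{H}_0 + \lambda \hat{H}_1, \qquad
E = E_0 + \lambda E_1 + \lambda^2 E_2 + \cdots, \qquad
E_1 = \int \psi_0^{*}\,\hat{H}_1\,\psi_0 \, d\tau
```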
We're going to have to integrate over the coordinates of the first electron, which has an r, a theta, and a phi in spherical coordinates; call them r1, theta 1, phi 1. And then we have the second electron: r2, theta 2, phi 2. And in order to calculate this thing, which is a number, we have to get rid of all these variables. In other words, we have to integrate them out, get rid of them. And that means six integrals sitting there. And that means we have to know the antiderivatives, or figure out a way to do it numerically, one or the other. And if we do it numerically, we have to have a way of guaranteeing the accuracy. Because with these integrals, especially if they go out to infinity like they do with r, it can be tricky to figure out, when a function is getting small but isn't zero, how much area is left under the curve if you stop, if you say, look, I'm tired of integrating out here, this is taking too much time. The tail may be small, but because it goes out a long way, it may be slightly bigger than you thought, and that may affect the accuracy. Luckily, we can do these integrals by hand. Unluckily, they're going to be difficult. And so I'm going to spend some considerable time going through this like a tutorial, rather than leaving all of it as a practice problem to do. Now, without the repulsion term, we could write our solution as a product of hydrogen orbitals, just hydrogen 1s orbitals. So here I've written that. I've written psi 0, our unperturbed function, as a product of the 1s wave function for electron 1, which I've written psi 1s of r1, and then psi 1s of r2. And the product of the wave functions means that the energies are additive, and I've written that in the next equation here: E0 is equal to E1s for the first electron, plus E1s for the second electron, and that's equal to minus E sub H, the hartree, because it's equal to 2 times the ionization energy of hydrogen. The energies are additive because the Hamiltonian is two noninteracting terms. I can break the Hamiltonian into a first term that only has to do with electron 1, and a second term that only has to do with electron 2, and I can prove that the two terms commute, and therefore the energies add up. And that's just what we found with the particle in a box, for example. Now what we have to do, however, is add in our messy term, the electron-electron repulsion with the r12 in there, and unfortunately, that wrecks everything. Here's our correction, then, that we have to calculate. And I've written a shorthand here: I've written the integral of d vector r1, because at this point these equations are going to get very long. I will do it systematically, but I don't want to write a 3D integral over d phi and r squared sine theta d theta dr and so forth for all these equations, because they won't even fit on the slide if I start doing that. We have to take our wave function, which has the coordinates of electrons 2 and 1. We have to sandwich our perturbation in between. And now we're very thankful that we've got atomic units, because our perturbation is just 1 over r12. It's just the reciprocal of the distance between the two electrons. That's our perturbation. And we know we've got formulas for the hydrogen atom solutions for the 1s orbital; we wrote those down before. And we have to do these two 3D integrals, one after another.
We have to integrate over all the coordinates, d r2 and d r1, and if we figure that out, then we can add the result to the energy that we got from the two 1s terms, and we should get the corrected energy of the hydride anion. And we know this correction is going to be positive, because the two electrons repel each other, and so that's what we expect. And as I said, the shorthand d vector r just means dx, dy, dz. Don't worry about it at this point; we'll get to it when we actually introduce all the spherical coordinates, and we'll do the integrals properly. Now, however, we've got a problem. We've got this thing r12, but we're integrating over either r1 or r2, and that means this unknown thing, 1 over r12, the distance between them, we have to reformulate in terms of something that depends on r1 and r2. Well, it could depend on theta and phi too, but it has to depend on the variables that we're integrating over. We can't just leave it as 1 over r12. We don't know what the antiderivative of that is. So let's fix the first electron along the z-axis; this is a trick that's quite important. Wherever it is, we're going to rotate it back so it's along z. And as long as we rotate everything, that won't change the energy at all, because the energy doesn't depend on what's north and south or east and west. It just depends on the distance between the two electrons. And so we're always going to put the first electron along z, and that way the angle between the vectors for the first electron and the second electron is going to be the angle theta in the coordinate system of our integration for the second electron. Without this trick, this problem gets extremely messy very quickly, and so you have to convince yourself that it's legitimate to do this. Because when you make an argument like this, if in fact it's illegitimate, if you did something wrong mathematically and it changes the energy, then you're going to have a big mess on your hands. So you have to be extremely cautious, especially when you start out a problem. If you make an assumption, you have to verify that it's okay before you start doing the calculation. Otherwise, you waste the rest of the afternoon calculating something that turns out to be nonsense later on. And sometimes it's not so easy to see, because it's not as easy as saying, well, three is greater than two, I know that. It can be a lot deeper than that to figure out whether something is okay to do or not, and it takes some experience sometimes. So what I'm going to do now is stop at this point, because that's enough material for us to digest in one go. And next time, what I'm going to do is introduce the coordinate system for the two electrons, show you how to calculate the distance between the two electrons in terms of r1 and r2, no matter what they are, then put in the functions, and then show you some tricks for how to do the integral. Because even when we get it in terms of r1 and r2, when you're integrating things with the square root of something in the denominator, it can be tricky to figure out what the antiderivative is. And if we didn't do it in a nice coordinate system, we'd be totally dead. We'd never be able to figure it out if we did it in Cartesian coordinates. So this is a round problem.
We definitely want to use spherical polar coordinates to solve this problem, and not something like Cartesian coordinates for a box, which is not what this problem is. So we'll leave it there and pick it up next time, when we talk about the law of cosines.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:55 Symmetry 0:02:23 Orthogonality 0:05:53 The Variational Principle 0:23:17 Atomic Units 0:34:25 Notation 0:37:29 Hydride Anion, Try #1
10.5446/18894 (DOI)
Hello. Today we're going to continue on that infamous cesium problem. We had all those transitions that we were supposed to assign the quantum numbers for, and we were asked if we could estimate the ionization energy, or ionization potential, same thing. And of course we can; otherwise we wouldn't ask that. Today, then, we're going to finish that problem, the energy level diagram problem. We're going to talk about spin-orbit coupling a little bit. And then we're going to talk a little bit about multiple-electron atoms and the Pauli principle, as kind of an introduction to much more complicated systems that we're going to be dealing with in the future. Recall that we had assigned the two resonance lines for cesium, the two strongest lines in the emission spectrum, which were from doublet P three-halves and doublet P one-half to the ground state, doublet S one-half. And the splitting between the P states was a very large value of 554 wave numbers, which is the spin-orbit coupling for the 6p electron. So let's pick up, then, where we left off. We had this fragment of the energy level diagram that I've redrawn here on slide 414. And we had these two transitions assigned, 11178 and 11732. Now we look at the other sets of lines. And what we look for is whether there's a difference of 554. And we start with the set where there are just two lines and then a semicolon. The key to this problem, especially if you get it on an exam, is that the semicolons mean something. They're giving you a hint by ordering the lines. If they just gave you a list of lines without any kind of ordering, the problem would be far, far harder to do. They didn't want it to be that hard, and it's plenty hard as it is. If we look at the other two lines, 7357 and 6803, which go together, and subtract, that difference is 554. And what that means is that there must be some level that is going to both those P states. If there are just two lines, that means it's an S state. And so what we think is that down here we've got the 6S; the two P states are up here, with the 554 between them, and that's quite a big energy difference; and then closer to the P states, now up here, is the 7S. And it can make a transition to both of those. That's allowed either way. And so if we're above, we can emit. We emit those two lines, go to there, and then those two both go to the bottom. These things can happen sequentially. And of course, when we look at the spectrum, we don't have any kind of time resolution usually. We're just looking at all the stuff that comes out. We don't keep track. It's just developing on a CCD camera or on film. If we assume that's a 7S, then we have this picture on slide 416. We have the 6S, the 6P split into two levels, and the 7S, a single level, again doublet S one-half. And we have those two lines: 7357 to the lower P state, and 6803 to the upper one. Now, by adding these numbers up, we have the energy difference between 6S and 7S. And that's how we're going to estimate the ionization energy in a little bit. The other sets are grouped into threes. And what I've done, then, is I've taken each group of three and taken all possible differences between the lines: one minus two, one minus three, two minus three, arranged as positive numbers. 3321 minus 2865 is 456 wave numbers. I haven't seen that. 3321 minus 2767 is 554. That I have seen.
So that's the hint that whatever is involved in these three is going to the same two levels, these central guys in this scheme that are my linchpin for figuring out how this stuff works. And then the other one, 2865 minus 2767, is 98. So that's another dud; as far as I'm concerned right now, I don't know exactly what the 98 or the 456 means. But I can guess, if there are three lines, that a D term is involved. Why? Because I've got doublet P, one-half and three-halves. When I go to doublet D, I have, by the Clebsch-Gordan series, three-halves and five-halves. Well, the D three-halves can go to both the P three-halves, that's delta J equals zero, and the P one-half, that's delta J equals minus one. And the D five-halves can only go to the doublet P three-halves, because it's not allowed to go from five-halves to one-half. So I expect to see three lines. I expect to see two of them from the D three-halves state to the two P states, and those two should have a difference of 554. Well, there is a pair with a difference of 554. So I'm going to assume that's what that is. And then the others have to come out to be the other differences. And so let's have a look, then, at what that means. Well, let's do the same thing with the other set of three. Why do they have much larger values? Well, they must be from another D state that's higher in energy, coming down again. And if I take 11,411 minus 10,900, I get another value I haven't seen, 511. But if I take 11,411 minus 10,857, bingo, 554. There's a level again coming down to those two P states. And if I take the third difference, I get a splitting of 43. The 554 is the spin-orbit splitting in the P state. I expect the splitting in the D state to be smaller than that. And so I'm going to sort of assume that the small values are the splittings in the D states. And that's how I'm going to try to assign them. Now, we don't know the principal quantum number of the D level. We've got 6S, 6P, 7S; that's pretty much set. And then we've got 5D and maybe 6D. But we have to figure out if 5D is below 6P in energy, in which case the transitions could be going down that way, or if it's above in energy. And the way we decide, so we had to do a bit of thinking here: it's probably 5D, but we don't know for sure if it's below or not. But if we do put the 5D below and then assign the transitions to get the 554, then what we find is that the spin-orbit splitting in the D state would have to be 456 wave numbers. That would be it. And that's far too big for a D state. So you have to know a little bit of trivia in order to get it for sure, and that is that it shouldn't be that big. And therefore it's the other way around: it's the 98 and the 43 that are the splittings in the D states. And in fact, both the D levels are higher than the 6P. And if you make that assumption, then we get the complicated diagram on this slide here. We have the ground state, the 6S. We've got the two 6P states. We've got the 7S. Then over here we've got the two states in the 5D, and then we've got the two states in the 6D. And we've got all these transitions assigned, and we've got all the quantum numbers assigned. And assigning all the quantum numbers you can means assigning every single one of them, for every single transition in this case. Now the question is, how are we going to figure out the ionization potential? Well, if we didn't know about that formula, minus R over the quantity n minus delta sub l, squared, we would never get anywhere. Because if we assume that it goes like hydrogen, with n equals 6 and 7, we get totally the wrong answer in this case.
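The line-sorting game is mechanical enough to automate. Here's a tiny sketch, with the line positions as given in the problem, that reproduces the differences just quoted:

```python
from itertools import combinations

# Emission lines in cm^-1, grouped as in the problem statement.
groups = [[7357, 6803], [3321, 2865, 2767], [11411, 10900, 10857]]

for g in groups:
    diffs = sorted(a - b for a, b in combinations(sorted(g, reverse=True), 2))
    print(g, diffs)   # every group contains the 554 cm^-1 6p spin-orbit splitting
```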
We have two ways of estimating the ionization energy. And I'm going to estimate the ionization energy both ways. I'm first going to use the S, the 6S and the 7S. And then as a check, I'm going to use the 5D and the 6D. And we'll learn a little bit about how to do that because these are split. And so the question is which one do you use? And the answer is neither of them. You have to figure out what the center of gravity is for those transitions. If we add the 11178 and the 7357, what we get is 18,535 wave numbers for the difference between 6S and 7S. So that's delta E in wave numbers. And now we have that formula, and the difference then must be minus R times, 1 over 7 minus delta L quantity squared, minus 1 over 6 minus delta L quantity squared. And this is a quadratic equation basically to solve for delta L. And we know the value of R, 109737. We should actually use the corrected value, but for cesium the reduced mass is essentially the mass of the electron. And so the Rydberg constant is the so-called R infinity, which is very close to 109737. So we'll just use that. And then we have a constraint on what delta L can be, because we have to be careful because we can get different roots of a quadratic equation. It has to be real. It has to be positive. And it has to be less than 6. And with those constraints on the possibility of the solution, what we find is that the delta sub S for this cesium atom is 4.15. This is an enormous defect compared to 6. It's subtracting off this huge number. Well, that actually makes sense when you consider all the electrons that are around there and the penetration to the nucleus. And then we can figure out the ionization potential from the 6S state, because then we can just put in n equals infinity for the upper state and take the difference. And that's just R over, 6 minus the quantum defect, quantity squared. So we take 6 minus 4.15, we square it and divide 109737 by that. And we get 32,063 wave numbers. That's our estimate for the ionization potential of cesium based on just having two levels there. That's a little bit dicey because usually you'd like to have many more than just two to see what the trend is. Now let's also try to get an estimate from the D states. For one thing, that's going to confirm that the quantum numbers that we've assigned are correct. And for another thing, it's probably going to give us a better value because there's less distortion in the D states. They behave more ideally. But in order to do so, we've got these two sets here. This one's split by 90 something and this one by 40 something. We need to have two levels which are sort of the unperturbed D state energies before we have the spin orbit coupling. We didn't have to deal with that with the S states. And the way you do that is you have to weight the two states by their multiplicity and you have to take the weighted mean. The five-halves has more sub-states than the three-halves, and we get a ratio of six to four for the states, or three to two. And therefore, as I've shown in these two equations, the difference between the 5D state and the 6S state, because I only have the difference, I don't have the absolute, is two times the difference to the lower level, which is 11178 plus 3321, plus three times the difference to the upper level, which is the same thing plus the 98, all divided by five. Or I could have taken six and four divided by 10 if I'm actually counting the states.
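As a check on that quadratic, here's a short sketch that solves for the quantum defect numerically by bisection instead, using the constraint that the root be real, positive, and less than 6; the scaffolding is ours, not the lecture's.

```python
R  = 109737.0        # Rydberg constant R_infinity in cm^-1
dE = 11178 + 7357    # the 6S -> 7S energy difference, 18535 cm^-1

def f(d):
    # zero when d is the quantum defect that reproduces dE
    return R * (1.0/(6 - d)**2 - 1.0/(7 - d)**2) - dE

lo, hi = 0.0, 5.0    # the root must be real, positive, and less than 6
for _ in range(60):  # bisection works here because f is monotonic on this interval
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

delta_S = 0.5 * (lo + hi)
print(delta_S)                 # ~4.15
print(R / (6 - delta_S)**2)    # ionization estimate, ~32,000 cm^-1
```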
Therefore, the energy difference, with the correct zero of energy, for the 5D versus the 6S is 14,557.8 wave numbers. And if we do the same thing with the 6D versus the 6S, we get a difference of 22,614.8 wave numbers. And now what we can do is we can take the difference between those two numbers. And then we have the difference between 6D and 5D, because whatever the difference is to S drops out of the equation. And using that difference of 8057, we can then solve for the quantum defect in the D state by solving minus R times, 1 over 6 minus delta sub D quantity squared, minus 1 over 5 minus delta sub D quantity squared. And again, we have a constraint that the solution be real, that it be positive, and that it be less than 5. And the allowed value then of the quantum defect in the D state is 2.43, which is less than the defect for S. And recall, I said the quantum defect gets smaller as you go out. It becomes more ideal. Using that, we can get the ionization potential. We first take R over, 5 minus the quantum defect, quantity squared. But then what we have to do is we have to add the energy it took to get up there from the 6S. So now we've got an estimate from this formula, and then we've got to get up there. So we take that plus 11178 plus 3321 and we get 31,198 wave numbers, which is a little bit different than the S state. But close, very close considering the way we're doing this with this approximate formula and two levels. And so really comforting if you get a result like this when it counts and you're going to get a grade on it. In fact, the literature value is 3.894 electron volts which converts to 31,324 wave numbers. Therefore, the estimate from the D states with their smaller quantum defect is closer to the accepted value. What I would say is that if you haven't been trained to look at problems like this and you're just given a problem like this cold and you just know something about the hydrogen atom, a problem like this is pretty much nearly impossible. And once you know how to do them and how to look for these common differences, then pretty much you can get everything done. Although if it's not an alkali metal, it's going to be much, much harder, and it just depends whether you'll be able to estimate the ionization potential or not; you may need more data. If in this problem we had more data, what we'd try to do is organize the data rather than just solving one equation in one unknown and then inferring the ionization energy. We'd try to make a plot where the ionization energy would be the intercept, and we would try to organize our variables so that the plot were linear. And then what we do is we check whether the plot is linear first, so that we know we're doing the right thing, and then we would zoom in and get the ionization energy. Okay, let's take a little bit more detailed look at spin orbit coupling. This magnetic interaction. We never included it in the Hamiltonian. We had the kinetic energy of the electron. We had factored out the nuclear motion. The potential energy, the electrostatic energy. And then we said later, hey, there's this magnetic thing, but how could we include it if we wanted to include it? We argued that the electron sees a magnetic field from the nucleus going around the other way, and that's why there's only a spin orbit splitting in non-s states. But we could do a little bit better than that, because we know that the energy of a bar magnet or any dipole, an electric dipole in an electric field or a magnetic dipole in a magnetic field, is minus mu dot b or minus mu dot e.
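The same bookkeeping in code, assuming, as above, weights of 4 and 6 for the J = 3/2 and J = 5/2 sub-states. This is only a sketch of the procedure just described, with names we made up; note that it adds the ionization step to the center-of-gravity 5D energy, so it lands near 31,200 rather than exactly the lecture's 31,198, the small difference being just which zero you add back.

```python
R = 109737.0
E_6P_lower = 11178.0     # the lower 6P level (doublet P 1/2) above the 6S ground state

# centers of gravity: weight J=3/2 by 4 sub-states and J=5/2 by 6
E_5D = (2*(E_6P_lower + 3321) + 3*(E_6P_lower + 3321 + 98)) / 5    # 14557.8
E_6D = (2*(E_6P_lower + 11411) + 3*(E_6P_lower + 11411 + 43)) / 5  # 22614.8

dE = E_6D - E_5D         # 8057 cm^-1, the 5D -> 6D spacing

def f(d):
    return R * (1.0/(5 - d)**2 - 1.0/(6 - d)**2) - dE

lo, hi = 0.0, 4.0        # real, positive, less than 5
for _ in range(60):      # bisection, as before
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

delta_D = 0.5 * (lo + hi)
print(delta_D)                        # ~2.4
print(R / (5 - delta_D)**2 + E_5D)    # ionization estimate, ~31,200 cm^-1
```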
And if the magnetic moment is proportional to s, the spin, and if the magnetic field is proportional to l, the orbital angular momentum, then we get an energy term which has some number, which I'm going to call beta, times l dot s, because that should be the dot product of the two, and energies are often dot products and forces are often cross products, and that's the way things play out. Then we can cast the dot product in terms of things that we actually know. We want to get it in terms of quantum numbers. So we can do a trick. We can take J dot J. That's the scalar product of the total angular momentum of the atom with itself, or J squared. And substituting l plus s for J in each place, we get l squared, which is l dot l, plus s squared, which is s dot s, plus 2 l dot s. And then we just solve that for l dot s, which is going to be our energy term there. And we get one half J squared minus l squared minus s squared. And now if we know the quantum numbers, we can just put in numbers for those things. And we don't have to worry about what the operators are, we just put in numbers. Let's apply that to the doublet p three halves and doublet p one half states. We know all the quantum numbers. We know s, we know l, we know J. So we're going to take the expectation value of the spin orbit energy in these states. Well, what we find then when we put in the quantum numbers, we get beta over 2 — beta is something that has to do with how strong the interaction is, we'll get to that in a second — and then we get for J squared, little j times little j plus 1. That's how that works. Minus little l times little l plus 1, minus little s times little s plus 1. If we do that for doublet p three halves, we simply get 15 over 4 minus 2 minus 3 over 4, times beta over 2. And so we get beta over 2 times H bar squared. And if we do it for doublet p one half, what we get is minus beta H bar squared. And you can see one of them is moved up by a certain amount, and the other is moved down. And that's why when we did the problem, we took the weighted mean, because they're moved by different amounts depending on the multiplicity, so that the center of gravity remains the same. The shift, whatever it is, this energy term beta should depend on z, the atomic number, because after all we saw with cesium, it was much, much bigger than sodium. And we would expect a bigger nucleus to generate a bigger magnetic field. But it's going to take us far too far afield to actually calculate this from first principles with the Thomas precession and all this other stuff that comes in. So I'm just going to quote the answer: that H bar squared beta, the energy term in front of the L dot S, goes like z to the fourth, and then there's a magnetic conversion factor, the g factor of the electron, and the Bohr magneton. And then there's this term that comes from doing some angular momentum algebra, 1 over, n cubed times a naught cubed times L times L plus one half times L plus 1. It's quite a long and involved formula. But what it shows is that as n and L increase, the splitting decreases, and we saw that again in the cesium: we saw that one of them was 90 something, the other was 40 something, that's higher up. So that makes sense. Okay, let's talk about two electron atoms. We did hydrogen, we did alkali metals which are kind of a dodge because they're just basically one electron and then this cloud of charge inside.
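That recipe, take J(J+1) minus L(L+1) minus S(S+1) and halve it, is easy to tabulate. A tiny sketch of our own, in units of beta times h-bar squared:

```python
def ls_expect(j, l, s):
    # <L.S> in units of hbar^2, from J.J = L.L + S.S + 2 L.S
    return 0.5 * (j*(j + 1) - l*(l + 1) - s*(s + 1))

up   = ls_expect(1.5, 1, 0.5)   #  0.5 : doublet P 3/2 moves up by beta/2
down = ls_expect(0.5, 1, 0.5)   # -1.0 : doublet P 1/2 moves down by beta
print(up, down)

# four sub-states move up, two move down: the center of gravity stays put
print(4*up + 2*down)            # 0.0
```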
And now if we want to actually do a two electron atom, then we have to start doing some serious work, because we have to take into account what's going on with the two electrons. And that isn't so easy to see until you've had some experience with it. With the hydrogen atom, we noted that we had a 6 plus 1 dimensional problem when we started out. Six dimensions in space, the three coordinates of the proton, the three coordinates of the electron, and then one dimension in time. And what we decided is, well, we aren't solving for time dependence. If we just want the static ground state time independent answer, we could get rid of time in the time dependent Schrodinger equation and just calculate the energy eigenstates, because the energy eigenstates are the states that don't change in time. That's why they're special. And then how did we get rid of the other three coordinates? Well, recall, we factored out the center of mass motion, which is just the whole atom drifting around. And then what we were left with is just the relative distance between the proton and the electron. And then at that point, we can make a mental dodge and say, well, look, the proton's fixed, the distance is to the electron, and at that point, we just start thinking about the coordinate in terms of where the electron is in the hydrogen atom. But for hydride, let's say H minus, that has a proton and two electrons, a big, fluffy thing, a very strong reducing agent. Or helium, not quite so big and fluffy and not very reactive. The wave function is six dimensional even after we take out the motion of the nucleus. Why? Because we've got two electrons. And so as I remarked earlier, we can't comprehend how to plot six dimensional things very well. And so we promptly get rid of all that complexity and we say, look, we have to take some kind of a simpler approach that's more tractable. We aren't going to write a wave function as a function of six coordinates and then try to figure out what's going on, even if we can solve that sort of thing, which even in three dimensions, where things separated nicely, wasn't so easy. And in this case, they aren't going to separate, because even for a three-body problem they don't separate. We can't disentangle the electrons from each other so simply. So what we do is what we always do when we don't like something: we throw it away. That's the first step. I don't like that term in the equation. Well, okay, let's assume it's small. And then if it isn't small, let's try to figure out how to fix it later. But first, let's get an answer that we can get. And so in this case, the term that's a pain is the electron-electron repulsion, the fact that these two electrons are buzzing around somehow, and whenever they get close to each other, they really repel like crazy, because they can, in principle, get very close — after all, they're almost geometric points. So that would give an extremely high energy, and that can happen anywhere in space where they are. So they try to avoid each other, and then they try to also cluster around the nucleus. And if we just ignore their repulsion, we can just treat them separately. I don't see you. You don't see me. We both see this guy. Let's try to get an estimate then and then correct it. In that case, if we ignore the repulsion, then the energy is just the sum of the energy of this electron interacting with the nucleus and the energy of that electron interacting with the nucleus.
And if the energies add, that means the wave function is a product, because that's basically what we did with the particle in a box. We said, look, the EX and EY energies just add up. That means the wave function factorizes into a product because those guys have nothing to do with each other. And if we turn off the repulsion between the electrons, they have nothing to do with each other. There's no forces between them. And so for these two electrons, instead of a wave function, let's say a function of r1, r2, which is six coordinates, we break it up right away into a product, psi 1 of r1 times psi 2 of r2. We just right away assume that. And that's our starting point. Once we assume a product of wave functions, each of which depends only on one set of coordinates, what we're doing essentially is we're putting each electron into its own orbital. This is an important concept. I've mentioned the term orbital before. For a hydrogen atom, there is only one electron. But for multi-electron atoms, there are lots of electrons. And the wave function is a big mess. But we don't want to deal with a big mess. So what we do is we put each electron into its own orbital. This is an approximation. It's a pretty good one in a lot of cases. And it's called the orbital approximation. It lets us consider each wave function for each electron to be only dependent on the coordinates of that electron. You can see right away that that can't be right, because how do you know what this one's doing if you don't know where this one is when they're repelling? But nevertheless, it can be a pretty good approximation. And it's the one we use. Now we don't have to stick with the hydrogen orbitals for the solution. We can. Why? Because the hydrogen orbitals form a complete set. Remember what we learned about the eigenfunctions of any Hermitian operator, that they're orthogonal to each other. So we can consider them as different directions. And we can make up any function we like as combinations of these hydrogen functions. And since we know what they are, and we can write them all down, that can be really attractive. But we don't have to stick just with them. We can take any kind of functions that we like, a Gaussian function or any other thing. And then what we can do is adjust the electrons in sequence, going round and round and round, until we get a better solution. We'll talk more about that later on in this series of lectures. That's the so-called self-consistent field model. But you can understand what it's going to amount to pretty easily. I've got all these electrons. They're all over the place. They're repelling, and so forth. And they're attracted to the nucleus. And I don't know what's going on. And it's a 27-dimensional problem. What I do is I put all the electrons into shells, into orbitals, like they would be in the hydrogen atom, but with an appropriate value of Z. And then I take one electron, and I take all the others and I smear them out into charge. I just forget the fact that there's a real wave function there, and I just smear them out into some charge distribution. And then I take the one electron I've got, and I've got this weird charge distribution from all the other ones and the nucleus. And I take this electron and I try to optimize its wave function until it lowers the energy, so that it's more ideal. And then I freeze it and make it into the charge. And I pick another electron, so I always have N minus 1. And then I have one left over, and I go round and round and round.
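The round-and-round procedure is easier to see in caricature than in words. Below is a toy fixed-point sketch of the idea only — the "energy" function is completely invented, and nothing here is a real self-consistent field code: freeze one parameter, optimize the other, swap, and stop when nothing improves.

```python
# A toy caricature of the round-and-round idea; NOT a real SCF program.
def energy(a, b):
    # two made-up "orbital parameters" with a small coupling between them
    return (a - 1.0)**2 + (b - 2.0)**2 + 0.2 * a * b

def argmin_1d(f):
    # crude grid search standing in for "optimize this electron's orbital"
    grid = [i * 0.001 for i in range(-5000, 5001)]
    return min(grid, key=f)

a, b = 0.0, 0.0
for cycle in range(100):
    a_new = argmin_1d(lambda x: energy(x, b))      # freeze b, relax a
    b_new = argmin_1d(lambda y: energy(a_new, y))  # freeze a, relax b
    if abs(a_new - a) < 1e-9 and abs(b_new - b) < 1e-9:
        break                                      # can't improve: self-consistent
    a, b = a_new, b_new

print(cycle, a, b, energy(a, b))   # settles after a few cycles
```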
And when I can't improve it anymore, then I say I've got a self-consistent solution. Doesn't mean it's correct. In fact, it's not correct, because smearing the other electrons into a charge is pretty good, but it's not the same as taking into account how things are actually moving, the so-called electron correlation. And in fact, other theories are much better at doing that. Once we're done, let's go back to the two electron atom. Let's forget the 27 dimensional one. If we go back and we've got two electrons in the atom, we can treat the electron-electron repulsion as a perturbation. And then we can try to adjust the energies to see if we get something that's more acceptable, that agrees better with these spectroscopic lines, which of course can be measured to many, many digits, which is a great test between theory and experiment. If you look back at the slides on perturbation theory, remember that we introduced this parameter that I called lambda. And when lambda was zero, we had our unperturbed solution that we had the exact solution for. And then when lambda is equal to one, we have what we're trying to get. And we tried to piece together zero and one by taking a power series in lambda on both sides of the equation and matching the powers. Because if it's going to match all the way through, the various powers should match. And after we match the powers, we get some equations. And then to get rid of lambda, we just set lambda equal to one. And then it's out of there. And then we have some equations to solve. And if you look back at that, then there's a correction to the energy right away when you have a perturbation, as long as it has nonzero matrix elements. And the correction to the wave function comes in at higher order. So you need a higher order calculation if you're actually going to correct the wave function itself for the atom from this product to something better than that. But there's another player in the game now. And we have to take a look at that. Before, with one electron, the spin only came in through the spin orbit coupling. But now the spin is going to play a major role if we've got two electrons. Because that's going to dictate which states we can even have. And we have to be very careful. And the spin is kind of an annoyance because it doesn't have these variables r, theta, and phi. It's just this quantum number, m sub s, plus or minus a half. But because the electrons can't have the same quantum numbers, it dictates everything. So there are two principles here. The Pauli exclusion principle is usually stated in the following terms: that two electrons in the same atomic orbital, with all the same spatial quantum numbers, have to have opposite spin. They cannot have the same spin. If one of them's up, the other's down. It's not quite literally true, but we'll see what it means. And this turns out to be a result of a much deeper symmetry principle, which is this. Just the Pauli principle, which is a principle that applies to all fermions, and electrons are fermions. And the idea is that if you take the total wave function of your atom and you swap any two identical fermions, then the wave function changes sign. That's okay, that the wave function changes sign, because when you square it, the probability density is the same. Of course, if you're mentally exchanging two identical particles, physically you could argue you've done nothing at all. So of course, the electron density, the probability distribution of charge, shouldn't change.
And of course, it doesn't change. But the wave function can change sign. And it does if we swap. So if we write a function and we swap what we call electron one with electron two everywhere in the function, then the wave function should change sign. If it doesn't, it's not allowed. Okay, as I mentioned, the probability density stays the same. And then there's another kind of particle, a boson, of which a photon is an example. And that one, if you swap them, doesn't change sign. It just stays the same sign. And it appears that these two particles have completely different properties mathematically in terms of our equations. One of them always changes sign. The other one doesn't change sign. And you could ask, well, why is it always one or minus one? Maybe it could change by e to the i theta. And it would still have the same probability density if the other one changed by e to the minus i theta. And everything would come out in the wash. And the answer is, maybe there are some things like that, called anyons. And if you're intrigued about those, you can look them up. But as far as we're concerned with atoms, it either changes sign or doesn't. And since we're only concerned with electrons, what we're really concerned about is this change of sign in the wave function when we swap the thing. Now, it's the total wave function that changes sign. And in the wave function, when we've got more than one electron, what we have to worry about is the spatial part, which depends on the coordinates, and the spin part, which is just grafted on. But the spin part can change sign too, because if I've got two electrons and they're both up, then if I swap them, nothing changed. If one's up and one's down and I swap them, I get something different. And so that's no good. If they're both down and I swap them, nothing changed. And so what's going to have to happen is, if we have the two electrons with the same spin, then the spatial part is going to have to be the part that changes sign. The spatial part has to be anti-symmetric if the spin part is symmetric, and the spin part has to be anti-symmetric if the spatial part is symmetric. If the two electrons are in the same orbital, the spatial part is symmetric, because if I swap them, it's the same orbital. That has to be completely symmetric. And so that's why they have to have opposite spin. That's the exclusion principle. Let's have a look then at how this plays out. We never included the spin explicitly with hydrogen. We didn't say, hey, with this single electron, let's go back and figure out if it's spin up or spin down. And that's because unless you look extremely closely, it doesn't matter whether it's spin up or spin down. Only if you worry about the proton spin as well does that matter. But when we've got two electrons, it is the major thing to keep track of. And you have to keep track of it, and you have to learn how to do it. Let's have a look then at how to do this. Let's abbreviate our wave function here on slide 437 as just psi of 1, 2. And 1 is electron 1, and 2 is electron 2. And it's going to depend on the spatial part and the spin part. Then the anti-symmetry constraint of the total wave function for the atom means this: psi of 1, 2 is equal to minus psi of 2, 1. That's it. If I have a wave function and it doesn't satisfy that, it's no good. I can't use it. And that's going to throw out a lot of possible solutions. So that's for the total wave function. Now suppose the two electrons have the same orbital part.
Let's call that psi 1 and psi 2. They have the same function for each electron. It could be e to the minus r over a naught or whatever. That's the same. That part's symmetric, because if I swap them, it's the same. And so the spin part has to be anti-symmetric in that case. Because typesetting books with arrows historically was incredibly tedious, especially before computer typesetting, usually instead of using these arrows what we use is alpha for one spin state and beta for the other. And we just write them in order. And the order is the order of the electrons. So what we can do is we can write the four combinations like this: alpha 1, alpha 2, that's both up; alpha 1, beta 2; beta 1, alpha 2; and beta 1, beta 2. Those are the four combinations. And the alpha, alpha and beta, beta states are symmetric. The other ones are neither symmetric nor anti-symmetric. And that means the other ones are no good as they stand. And as we saw when we took two electrons and coupled them, what we have to do is make a proper value of big S for those. And remember, one value went with the triplet and the other went with the singlet. And that's exactly what we have to do here. So doing the same thing that we did when we coupled two spin one half electrons to get big S, we get a symmetric combination, which is root 2 over 2, alpha beta plus beta alpha, and the other combination, which is anti-symmetric, which is root 2 over 2, alpha beta minus beta alpha. And only the first combination is symmetric; the second is anti-symmetric. And you can see that if you actually substitute and just swap them around. You have to swap them around and then look carefully at them, and you'll see that the second one changes sign. And I've shown that here on slide 440. Since the spatial part is symmetric, the spin part has to be anti-symmetric, and 3 out of 4 of these spin wave functions are no good. The alpha alpha, beta beta, and alpha beta plus beta alpha are all no good. Those were the S equals 1, or triplet, states. And it's only the anti-symmetric singlet state, S equals 0, that survives. So it's not really that the two electrons, one is up and one is down. Remember, this is quantum mechanics, so it's always weirder than we think. It's a 50-50 mixture of I'm up and you're down, minus you're up and I'm down. That's what it is. It's not just one combination, because that wouldn't have the right symmetry either. And that part then, what I've called sigma minus here, can pair with the spatial part. And we get the overall wave function for this two electron system, if they're in the same orbital, which they would be, let's say, for helium: psi 1, psi 2 times sigma minus of 1, 2. And that will be where we pick up next time, when we actually try to figure out, okay, we've got these electrons. Let's try to actually calculate some energies for these atoms. Let's have a look at how it plays out and how we can take into account the repulsive terms between the electrons. And that's quite an interesting little exercise. There's a lot of mathematics, but most of it we can sort of bludgeon our way through, with the help of some friends that know how to do a lot of integrals that we're going to have to be able to do, to get them down to a number at the end, and then see what that number is and how much the energy shifts. So we'll pick it up there next time.
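The sign claims about those four spin combinations can be checked by brute force. Here's a small numpy sketch, in our notation rather than the lecture's: represent alpha and beta as two-component vectors, build the two-electron states as tensor products, apply the swap operator, and see which combinations come back with a plus and which with a minus.

```python
import numpy as np

alpha = np.array([1.0, 0.0])   # "up"
beta  = np.array([0.0, 1.0])   # "down"

def two_spin(x, y):
    # the state |x> for electron 1 and |y> for electron 2, as a 4-vector
    return np.kron(x, y)

# Build the operator P that swaps the two electron labels
P = np.zeros((4, 4))
e = np.eye(2)
for i in range(2):
    for j in range(2):
        P += np.outer(two_spin(e[j], e[i]), two_spin(e[i], e[j]))

sym  = (two_spin(alpha, beta) + two_spin(beta, alpha)) / np.sqrt(2)
anti = (two_spin(alpha, beta) - two_spin(beta, alpha)) / np.sqrt(2)

print(P @ two_spin(alpha, alpha) - two_spin(alpha, alpha))  # zero: alpha alpha is symmetric
print(P @ sym  - sym)    # zero: the plus combination is symmetric
print(P @ anti + anti)   # zero: the minus combination changes sign
```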
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description; and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:51 Cesium 0:19:37 Another Look at Spin-Orbit Coupling 0:23:42 The Energy Shift 0:25:00 Two-Electron Atoms 0:37:06 The Pauli Principle 0:41:31 Spin and Space
10.5446/18893 (DOI)
Hi, today in chemistry 131A we're going to talk about hydrogen wave functions, quantum numbers, term symbols, and transitions in atomic spectroscopy. Remember that the solution for the hydrogen atom gave us three quantum numbers: N, that had to do with the energy; L, that had to do with the square of the orbital angular momentum; and M sub L, the magnetic quantum number, which was the projection of the angular momentum on the Z axis. And then there was another number, M sub S, that came from the intrinsic magnetic moment of the electron itself. And so if we have these four quantum numbers, then we know everything that we can know about an electron in a hydrogen atom. However, once we have more than one electron, then there's the well-known Pauli exclusion principle, which you've probably learned from freshman chemistry. And that is: no two electrons can have all four quantum numbers the same. So if N, L, and M sub L are the same, for example, the electrons in a 2PX orbital, then M sub S has to be different for the two electrons. One has to be so-called up and the other has to be down. In other words, they have to be paired. We'll see a little bit later what that actually means. And so as I said, electrons in the same spatial orbital have to be paired up magnetically with respect to the spin. Let's look at the possibilities for a neutral carbon atom with six electrons. The configuration from the periodic table for a carbon atom is 1S2, 2S2, 2P2, and that means that there's two electrons in the P shell that can occupy any of the three P orbitals. The other orbitals are filled. And for the L equals one state for P, M sub L can be 1, 0, and minus 1. And we can figure out then the limits on L and S, which depend on each other because of the exclusion principle. Let's have a look at this. The first thing to note is that only the P electrons have any flexibility. The 1S shell is filled. That's out of commission. The 2S shell is filled. That's out of commission. And it's only the frontier orbitals that we're putting electrons into that have any flexibility as to how the electrons can orient. Whatever's already happened to 1S and 2S has happened, and that's water under the bridge. For example then, here on slide 389, all these possible configurations of the electrons in these three P orbitals are allowed. On the left, we've got two electrons in the M sub L equals plus 1 orbital, and they have to be paired because of the exclusion principle. Or we could have one in the 1 orbital and one in the 0, and they could be also paired, one up, one down. And we could have one in 1 and one in the minus 1. And we could have them parallel as well, one in the 1 and one in the 0. All those possible configurations of the electrons in the P orbitals are okay. They're all allowed. But on the bottom of the slide here, none of these are allowed. In the first one, we have M sub L equals plus 1. There are two electrons and they're both up. That means they have all four quantum numbers the same, because N is 2, L is 1, M sub L is 1, and M sub S is plus 1 half for both of them. Not allowed. Same thing with the others. Whenever we have two electrons in the same orbital, they can't have the same spin. Remember that we had a shorthand way to keep track of these various atomic levels, which was the term symbol, which gave us sort of a summary of how L and S would magnetically couple to give the total angular momentum J, and would allow us to keep track of which transitions were allowed.
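Counting which arrangements survive the exclusion principle is a job a computer does well. A short sketch with made-up variable names: enumerate the six (m_l, m_s) spin-orbitals of a p shell and take all distinct pairs; the combination M_L = 2 with M_S = 1 never shows up, which is exactly the forbidden both-up-in-the-plus-1-orbital arrangement drawn on the slide.

```python
from itertools import combinations

# the six one-electron spin-orbitals of a p shell: (m_l, m_s)
spin_orbitals = [(ml, ms) for ml in (1, 0, -1) for ms in (0.5, -0.5)]

# Pauli: the two electrons must occupy two *different* spin-orbitals
microstates = list(combinations(spin_orbitals, 2))
print(len(microstates))    # 15 allowed microstates for p^2

# Totals (M_L, M_S) for every allowed microstate
totals = [(a[0] + b[0], a[1] + b[1]) for a, b in microstates]
print((2, 1.0) in totals)  # False: no M_L = 2 with M_S = 1
```

Those 15 microstates are exactly the 5 + 9 + 1 states of the singlet D, triplet P, and singlet S terms worked out next.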
Remember that the term symbol is 2S plus 1 on the left hand superscript; capital L, which was S, P, D and so forth, for the central letter; and then a value of J, which is the way S and L couple. And they can couple in various ways. There are various allowed values according to the Clebsch-Gordan series that we derived. 2S plus 1 is called the multiplicity, but keep in mind that it is only really the number of states there if, in fact, L is greater than or equal to S. Because if L is less than S, then there are fewer states allowed. So if S is zero, it's called a singlet. If S is a half, it's a doublet. If S is one, a triplet. However, if we have doublet S one half, where L is zero, then it's not really a doublet. There's only one level there for the electron to occupy. So it's not really a doublet in that case. Let's try another example and have a look at these then and see which values are allowed. So this is practice problem 20. What terms can arise from a neutral carbon atom with the electron configuration 1S2, 2S2, 2P2? Well, remember the way we approach these problems: we have to figure out the possible values of big L based on the values of little L for each electron. We couple those together. Then we figure out the possible values of big S based on spin one half for the electron. We couple those together. And then we use the Clebsch-Gordan series to get the possible values of J, which naively go from L plus S down to the absolute value of L minus S. So if we can figure out these possible values of big L and big S, we should be able to come up with the term symbols. But what we'll see in this case is that it's a little bit more complicated than it might first appear. And the reason why is that some of the values that we'd like to pick, Pauli excludes from possibility. The possible values of big L with two P electrons are 2, 1, and 0. Those are the three you can get from two electrons, each with little L equals 1. And the possible values of S, big S, the total spin angular momentum, are 1 and 0. That's what you can get from two spin one half electrons. Therefore, if we just took the Clebsch-Gordan series, it would seem that J could be, well, L plus S is 3, down to the absolute value of L minus S, which is 0. So it would seem that J could be 3, 2, 1, or 0. But we can't have J equals 3 when we have both electrons in the same shell. And the reason why is that if we had J equals 3, we'd have to have L equals 2. So let's put the two electrons in the plus 1 orbital. And then to get 3, we have to have S equals 1. So we have to have them parallel. That means that we have to have them both in the same orbital there, and then we have to have them with the same spin. But that's not allowed, because if they're in the same orbital they have to have opposite spins. So we are not allowed to take the maximum value in that case. If we had an excited carbon atom, and that's why some books give you these weird problems where it's an excited atom, where one of the electrons is in a 2P and one's in a 3P, then you just total them all up, because they're in different shells. And so the Pauli exclusion principle doesn't apply. They aren't in the same orbital. In this case, which is actually more important to work out, because usually you want to work out the terms for the ground state of the atom to figure out what kind of spectroscopy you can see, you have to be quite a bit more careful how you do it. Therefore, if we have L equals 2, we have to have them paired. So we have S equals 0.
The electrons have to have opposite spin. And the term that arises from that is singlet D. So if it's singlet D, then that means that J has to be 2, because S is 0, 2S plus 1 is 1, and with L for D the series stops with J equals 2. And so the term that arises is singlet D2. For L equals 1, we're okay, because then we can have one electron here and the other over here somewhere. And in that case, we get a triplet P term. And there are three values, because now S is 1 and L is 1. So we get triplet P2, triplet P1, and triplet P0. And then the last term that we can have is L equals 0, S equals 0. And therefore, the only possible value of J is 0. And this would be called singlet S0. What you have to do then is you have to go back and say, what were the possibilities of the states I started with? Recall that we did that when we did this before with just coupling two electrons. We made sure that we accounted for all the possibilities that were there, and then made sure that our terms cover all those and no extra ones. And like anything, like doing crosswords or Sudoku or anything, this takes some practice. But it can be quite a rewarding puzzle to work out, to make sure that you've got it right, that you aren't including any non-allowed states in the terms. If you've got an open F shell with multiple electrons or an open D shell with multiple electrons, working out the allowed terms can really require a quiet room, because it can get quite complicated to keep track of which ones are allowed and which ones are excluded, because you can't have two electrons in the same orbital with the same spin. Suppose on the other hand we had five P electrons, say fluorine. If we took all five electrons, then we'd say, boy, this is a mess, because almost all these orbitals are filled, and so there's these millions of combinations that are no good because we can't have the electrons that way. It would be very, very difficult to do and it would take a long time. And so the trick we can do is we can treat the missing electron, the hole, as if it were an electron, and we can move it around and we can just treat it the same way we would if we had one electron. And that's because once it's more than half full, it's really the holes that are dictating which terms can arise. The electrons are filling all the orbitals. And it turns out that it's exactly the same, and so you don't have to do it over. Therefore, the worst case is when you've got three P electrons or five D electrons. Once you've got more than that, it just reduces to a previous case where you have fewer electrons. So it's no harder. And so oxygen, which has a 2P4 configuration, gets the same terms as carbon, which had the 2P2 that we just did. And likewise, fluorine gives rise to doublet P3 halves and doublet P1 half terms. Now I want to talk about the shapes of the hydrogen wave functions. We can plot solutions. We had the equations, but let's just look at the shapes of the wave functions themselves. Not the radial distribution function, but the angular part as well. And it's common practice to plot these then as a contour plot, in which we pick some arbitrary contour in space and we say there's a certain likelihood that the electron is inside this contour. And then we can pick a color to code whether the phase of the wave function is plus or minus. Remember it's a wave, so it has a phase, and it has a phase in 3D just like a sine wave has a phase in 1D.
Sometimes, before people used color, they'd mark a plus on one side of the wave function and a minus on the other. The problem with using plus and minus is that if you look at it and you aren't quite sure what it refers to, you might think it has something to do with charge. But it doesn't have anything to do with charge, because electrons are always negative. It's just whether the phase of the wave function is positive or negative. And in between there's a node, where the wave function has to be zero. The thing is, we tend to avoid using the true eigenstates for the hydrogen atom, and the reason why, which I referred to in a previous lecture, is that if we include i and we've got e to the i m phi going around, then it's difficult to figure out what you're going to plot, because it's much easier to say, well, I'm going to plot that surface where this number is equal to 0.9 or something like that. But if, as you go around, the phase is changing, then you'd have to figure out some way to code that. And you might be able to do that by coding in a rainbow, but on the other hand maybe that might be confusing when you include the phase. So as a dodge, what tends to be done is, instead of plotting the true eigenstates, we plot linear combinations of plus and minus m sub l that either give the cosine part or the sine part. And recall from Euler's identity that e to the i theta is cos theta plus i sine theta. So we can pick linear combinations to plot those. And that's usually what's done. We throw away the i, we pick the real part, and then we just plot the things. However, keep in mind that the plots depend a lot on the particular value of the contour that you pick. Do I pick that the electron has to have 99% probability of being inside this thing, or 50% probability of being inside this thing, or whatever? Usually 90% is a typical value, but as we saw for the hydrogen atom, if we go out just to the most likely radius of where the electron is, the Bohr radius, there's only a 32% probability that the electron is inside there. So perhaps you could argue that maybe you should pick a more conservative percentage to get a more realistic idea of the size of the spatial extent of the wave functions. So here are some wave functions then plotted. And red is positive phase in these plots and blue is negative phase. And the 1s is the little dot on the left. That's a little red thing. And it just is spherical, as we would expect. And the 2s is a bigger sphere. There's a tiny red dot in the center. And then there's a bigger blue area. And that's because, as we saw, the radial function for 2s has a node at some point where there is zero probability of finding the electron. And then we get these big fluffy orbitals, the 2pz, 2px, and 2py, that look really quite big and distended. And that has to do, of course, with the particular contours that were chosen to plot these values. Recall that a node is a place where the wave function changes sign. Strictly speaking, it shouldn't just go to zero, but it should be plus on one side and minus on the other side. And a node in a 3D wave is a surface where the 3D wave has a value of zero. The radial nodes are in fact spheres. We saw that for S states there are nothing but radial nodes. And then the angular nodes are at angles, which turn out to be planes where the wave function is zero. A 1s function has zero nodes, a 2s has one radial node, and a 3s has two radial nodes.
And in general, the number of nodes is n minus 1. If we look then at the 2pz orbital, which I have here on slide 398, we see that there's an angular node, there's a plane in which the wave function vanishes. It's positive, red, on one side, blue on the other side, and in between the wave function vanishes on the plane z equals zero. The d functions look like this. Here's 3Dxy looking like a big clover leaf. And you can clearly see that there's a disturbance building up in the angular variation of the electron as it assumes more quanta of angular momentum. That makes perfect sense. And so there are five 3D orbitals, and we saw what they were. We wrote them all out, but we didn't see exactly what they looked like, so here's 3Dxy here with these four lobes. And unlike p, the blue are across from each other and the red are across from each other. p changes sign when you rotate it 180 degrees. This d changes sign if you rotate it 90 degrees. And that's because it's 2 quanta of angular momentum and not 1. Here's a different view of the d wave functions and how they may add up to give us a shell that has spherical symmetry. Four of them are pretty much like clover leaves. They're four things, red and blue. Here they're just gray. And here they're also plotted at a different level of the contour, so they look a lot thinner than in the other plot. And then there's this funny one, 3Dz squared, which almost looks like a p orbital. And unlike the others, where I can clearly see these planes where this thing is vanishing, this one, especially without color, is a little bit hard to see. But what we have with the 3Dz squared is a donut around the equator, and then we have this thing that looks a little bit like a p orbital. But the difference is this thing is all the same color. It doesn't change sign. And this donut's all the same color. And you still look at it and you say, well, where are the angles where the nodes are? And the answer is they're at the magic angle. If you look back at the formula for this particular orbital, you'll find that there's a tilted angle where the wave function vanishes. And it vanishes all along that angle all the way out. And so that's how that one works. So it looks a little bit different, but in fact it's exactly the same in terms of vanishing at values of an angle, just like the other ones do. Now let's talk about spin orbit coupling. We saw that we had these two lines for the sodium spectrum. And they were closely separated. And the two transitions were doublet p 3 halves goes to doublet s 1 half and doublet p 1 half goes to doublet s 1 half. Recall that delta J can be zero, as long as it's not J equals zero going to J equals zero. The energy levels then break down like this, as I've shown on this figure on slide 401. There are two levels, two excited states above the ground state, these p states, and they're split by an amount I've called delta in this slide, the spin orbit coupling. And what we'd like to do is explore this a little bit and figure out exactly where this comes from. The only difference between the two states is the orientation of the spin relative to the orbital angular momentum, because they have the same value of s and l, but J is different. So s and l can add up like a triangle and make J the third leg of the triangle. Then J can be different because there can be several different values that work.
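For reference, the node bookkeeping and that magic angle are one-liners. A small sketch of our own: radial nodes are n minus l minus 1, angular nodes are l, total n minus 1, and the angular node of 3d z squared, whose angular part goes like 3 cos squared theta minus 1, sits at arccos of 1 over root 3.

```python
import numpy as np

def node_counts(n, l):
    # radial nodes are n - l - 1, angular nodes are l, total n - 1
    return n - l - 1, l

print(node_counts(2, 0))   # (1, 0): 2s, one spherical node
print(node_counts(3, 0))   # (2, 0): 3s, two spherical nodes
print(node_counts(3, 2))   # (0, 2): 3d, two angular nodes

# the 3d_z2 angular part goes like 3 cos^2(theta) - 1, vanishing at
theta = np.degrees(np.arccos(1.0 / np.sqrt(3.0)))
print(theta)               # ~54.74 degrees, the magic angle
```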
And so the energy difference must just be whether this intrinsic bar magnet of the electron is aligned with or against the magnetic field that it sees: from the perspective of the electron, the nucleus is going around the other way, and it creates a magnetic field that acts on the electron. If we know the measured optical frequency difference between the two levels, then we can estimate the magnetic field that the 3p electron in sodium appears to be experiencing from going around the nucleus of the sodium. The magnetic energy difference of an electron spin is: delta E is equal to G, the so-called electron G factor, which has a value about equal to 2, and then a conversion, the Bohr magneton, that converts magnetic field to energy, and then B, the value of the magnetic field that the spin is experiencing. That's the magnetic energy that this bar magnet that the electron has is going to feel. So let's do a practice problem and figure out what kind of field it experiences, after we figure out exactly what the spin orbit coupling is. What's the spin orbit coupling in wave numbers and in electron volts, and what is the apparent magnetic field that the electron in a sodium atom in the 3p orbital experiences? Okay, well, the wavelengths of the two transitions are 589 nanometers and 589.6 nanometers, and we know the energy difference is delta E is h nu 1 minus h nu 2, and that's hc times 1 over lambda 1 minus 1 over lambda 2. And we can convert that then to wave numbers as just hc times nu bar 1 minus nu bar 2. Those are the wave numbers in inverse centimeters, and we can just write that as delta nu bar. So we can solve for delta nu bar, and we just have to take the inverse of the wavelength, which is in nanometers, to the minus 1, times 10 to the 7 nanometers per centimeter. And be careful: if you're doing an exam you may want to convert from nanometers to meters and meters to centimeters. It seems easy, but it's also easy to get a factor of 10,000 off if you go too quickly and happen to go the wrong way with one of the conversions. If you do that, then the spin orbit splitting in sodium is 17.28 wave numbers. If we convert that to EV, which we do by converting to joules and then simply converting from joules to EV by dividing by 1.6 times 10 to the minus 19 joules per EV, we get a very small value of 2.142 times 10 to the minus 3 electron volts. Keep in mind that the ionization energy of hydrogen, to kick the electron completely out, was about 13.6 electron volts, and here we're talking 2 milli electron volts. So these magnetic interactions in these systems are very, very much smaller energy differences than the main electrostatic potential that the electron feels from the charge. And that's a common theme. Basically, magnetic energies are always quite a bit smaller than electric energies. And in the hydrogen atom the spin orbit splitting is very, very tiny compared to sodium. And so it took a lot of detailed work to even work out that there was something there, and then to explain it. And it's very interesting historically. It's too bad we don't have time to go through all the thought processes that went into finding the fine structure and hyperfine structure of these systems. But it is very interesting detective work from a science standpoint. Finally then, okay, we know this spin orbit splitting. Let's then use our formula for this, the energy difference of an electron in the magnetic field. We know the energy difference.
We know the g value of the electron. We know the Bohr magneton. That's a constant. We look that up. Let's figure out then what the apparent magnetic field is that this electron is seeing. And what we get, if we take delta E divided by g times the Bohr magneton, is 18.5 Tesla. 18.5 Tesla is an absolutely huge magnetic field. If we tried to make a magnetic field like that in the laboratory, we would be very hard pressed. Even if we took tons and tons of wire and wound a gigantic electromagnet, we would not get such a big magnetic field. In fact, the very biggest NMR spectrometer at UCI has a magnetic field of 18.6 Tesla. But the magnet is 12 feet tall, and it has several miles of superconducting wire wound into a gigantic solenoid to make this magnetic field, which is enormous. So you can see that, because the particles are so close and things are moving so rapidly, there are very large magnetic interactions on the electron, very large compared to what we can do in the laboratory. And so when we take atoms and we put them in even a pretty strong magnetic field, the energy levels do split and move and do things. But they split only a little bit compared to how big they're already split due to these internal magnetic fields from the nucleus. And that's good, because that means that if we have two levels like this, and then we turn on a magnetic field and they split, they go like that. And that makes it easy to keep track of what's going on. If they went like this and all over the place, it might be really hard to figure out what the spectrum was doing. If we look at other alkali metals like lithium, potassium, rubidium, and cesium, then because they have a single electron outside a closed shell, they all give rise to exactly the same terms as sodium. And so their spectra are very similar. They have the same allowed transitions and so on, except for the absolute energy differences, which can be different because of the differences in the energy of the various orbitals, S, P, and so forth. Why are they different? Well, they have different numbers of electrons and they're in different shells. So of course they're going to be different. And it turns out for these systems, there's a slightly jiggered formula for the energy that works when there's a single electron outside a closed shell. Sort of like hydrogen — you could think of hydrogen as a single electron outside an empty shell. And in this case, we can write E, the energy of the state with respect to zero energy, just like with hydrogen, which depends on N and L now, not just N, as being minus R, the Rydberg constant, divided by quantity N minus delta L, squared. And delta L has to do with the other electrons being around, changing the apparent value of the energy. Delta L depends only on L, not on N. And that's important, because since it doesn't depend on N, if we get a couple of N values we can solve for delta L, and then, for example, we can estimate the ionization energy of a non-hydrogen atom, if it's an alkali metal, by this means. Unfortunately, for other atoms, carbon or something else, no simple theory like this is even going to be close to the truth; there are too many other problems. But at least for a single electron outside a closed shell, there's this slightly different formula. And this so-called quantum defect, as it has been called historically, has a value that is bigger for S orbitals. S orbitals have a bigger defect than P, and P a bigger defect than D.
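Here is the sodium arithmetic from the practice problem collected into a few lines of Python (constants rounded, variable names ours): the doublet splitting in wave numbers, its value in electron volts, and the apparent internal field.

```python
h    = 6.626e-34    # Planck constant, J s
c_cm = 2.998e10     # speed of light in cm/s, so energies land in wave numbers
mu_B = 9.274e-24    # Bohr magneton, J/T
g    = 2.0          # electron g factor, approximately

lam1, lam2 = 589.0, 589.6                 # wavelengths in nm
dnu = 1.0e7 * (1.0/lam1 - 1.0/lam2)       # 1e7 nm per cm -> cm^-1
dE  = h * c_cm * dnu                      # splitting in joules

print(dnu)                                # ~17.28 cm^-1
print(dE / 1.602e-19)                     # ~2.14e-3 eV
print(dE / (g * mu_B))                    # ~18.5 T apparent field
```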
And as you go out, it gets to be more and more ideal, because as you increase the angular momentum, you're just out there. And so basically what you see is much more like a hydrogen atom, because all the other electrons cancel out the nuclear charge except for the one extra charge that makes the atom neutral, with this electron way out there. But for S, even if you're in a big S shell, you can penetrate in. And so then you see that the energy is not nearly so ideal. So I've said that on this slide: the S electrons can penetrate. And here what I've drawn is an attempt to try to rationalize why this is so, why we have to have this term. The potential when you get inside looks like a multiple charge, because now the other electrons are outside you, and so you actually see this gigantic nuclear charge, and you start diving down in energy. And that's this curve I've marked multiple charge. That's with some value that's not equal to 1. And then as you go way out, and all the electrons are inside, what you would expect is that it would look like a single charge. And on the interior part, I've drawn the same potential but with a single charge now, not with a multiple charge. And then the true potential has to look like the single-charge potential far out, but it has to look like the multiple-charge one as you bury in and you're inside all the other electrons. So it has to sort of interpolate between the two curves that we've got, and in fact that's exactly what that formula basically does for us. Let's look at the emission spectrum now of the neutral cesium atom as a practice problem. And we're going to look at it in some detail. In fact, we're going to look at it probably in enough detail that we won't be able to finish the whole thing in this lecture, and that will be good, because we'll have time to digest what we did and then come back and have another look at it. Practice problem 22 then is this. The strongest lines in the emission spectrum of the neutral cesium atom are at 11,178 and 11,732 wave numbers. And these lines are also seen in absorption. Wave numbers for related lines in the emission spectrum are 7357, 6803, then there's a semicolon, that's important; 3321, 2865, 2767, another semicolon; 11,411, 10,900, 10,857. One, construct an energy level diagram for the cesium atom and assign as many quantum numbers as you can. Two, could the ionization potential of cesium be estimated from these data? The first time you get a problem like this, especially if it's on an exam, is one horrible moment, because you start to wonder if you ever had any experience with doing a calculation like this on a multi-electron system at all. And you might also wonder, if I've got more than one electron, how do I know that it's only the outer electron that's monkeying about? What if I start to wonder suddenly, maybe one of the inner ones is doing something too, and how can I figure out what's going on? The answer is, it's always the outer one that's going, and you can never get the inner ones excited like this, because before that happened the outer electron would have ionized off, and so you'd be doing spectroscopy on the cesium ion then. So you don't have to worry about that. It's just the outer 6S electron, in this case, that's doing the various transitions here. But we've got all these various lines and we've got to figure out what's going on. So we know that the ground state is 6S1 for that electron; cesium's on the sixth row, in the first column.
That's basically what the periodic table is telling us about the electron configuration. And we have those two strong lines. The strongest lines are called the resonance lines. The strongest lines in sodium are those two yellow lines. That's why the sodium lamp is yellow: those are doing all the emission. There are other transitions going on, but they're less intense. The fact that those lines are seen in absorption matters. In absorption you have to start in the ground state. You don't start with the electron up here, because if you have the electron up here and it's an absorption, the temperature would have to be extremely high. And the temperature is not extremely high. The temperature is just whatever it is. And so we know that those two lines terminate in the ground state of the atom. And that's an important clue to help us assign them. Why are there two of them? Well, we go back to our term symbols. The term symbols for cesium are the same as sodium. So although this seems hard, it's actually not that hard, because we know the ground state is doublet S1 half. There's a single electron outside a closed shell. And we know that the two excited states, the first two there, are doublet P3 halves and doublet P1 half. And we know that they're split. And they're split by the spin orbit coupling. And we know that if we've got a very big positive charge on the cesium atom, that spin orbit coupling should be big. And if we happen to remember, and oftentimes you do if you're under pressure, that the sodium splitting was 17, we expect some splitting that's going to be quite a bit bigger than 17 when we have this enormous cesium charge moving around. And that, my goodness, that could make an enormous magnetic field at the site of the electron. And therefore, these resonance lines give us the following picture, which I've shown here on slide 410. We have the two doublet P3 halves, doublet P1 half states. And they are 6P. The other mistake you can make is you might mark them as 7P. They are not. They're 6P. The 6P and 6S in cesium have a huge energy difference, because the 6S can penetrate the other levels. And the 6P has a node at the nucleus. So the 6P electron hardly ever sees the cesium nucleus, because it has a zero there. And when you look at the probability, you have to square it. So it's far less likely. The S has a finite probability of being right at the nucleus. And in fact, in some cases, S electrons actually get gobbled up by the nucleus. And that turns out to be a mechanism for radioactive decay called electron capture, which is just like what it sounds. So we can draw these two transitions. One of them is 11178 for the lower one. The other one is 11732 wave numbers. And the difference between them is 554 wave numbers, which is a very, very huge value. But maybe not surprising, because cesium has a lot more charge than the sodium atom.
We're going to keep looking for things that differ by 554. If they do, that means two things. It means that these two P states are involved in the transition, and that whatever level came down to these P states was allowed to make a transition to both of them. So we have a common level, and it's going to these P states. So we'll leave it there, and when we come back next time, what we'll try to do is fill in all the other energy levels for this atom, make all the assignments as we were asked to do, and then we'll also try to see if we can estimate the ionization potential. Of course, if you get a problem like this on an exam, which I actually did, you don't say, well, no, you can't estimate the ionization potential. You assume you can, and then you figure out how to do it. So we'll come back to that next time.
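To make the 554 flashlight concrete, here is a minimal sketch in Python of the sorting step just described. This is not from the lecture itself; the line values are the ones quoted in practice problem 22, and the matching tolerance is an assumption of mine.

```python
# Sketch: look for pairs of cesium emission lines (in cm^-1) separated by
# ~554 cm^-1, the 6P(3/2)-6P(1/2) spin-orbit splitting. Each matching pair
# suggests a common upper level that emits to both 6P components.
# The tolerance is an assumed value, not from the lecture.

lines = [11178, 11732, 7357, 6803, 3321, 2865, 2767, 11411, 10900, 10857]
SPLITTING = 11732 - 11178   # 554 cm^-1, from the two resonance lines
TOLERANCE = 10              # cm^-1

for i, a in enumerate(lines):
    for b in lines[i + 1:]:
        if abs(abs(a - b) - SPLITTING) <= TOLERANCE:
            lo, hi = sorted((a, b))
            print(f"{hi} and {lo} differ by {hi - lo} cm^-1 -> common upper level")
```

Running this picks out (11178, 11732), (6803, 7357), (2767, 3321), and (10857, 11411): exactly the kind of grouping the assignment in the next lecture relies on.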
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D Description: UCI Chem 131A covers principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. Index of Topics: 0:00:21 Quantum Numbers 0:02:40 Configurations 0:04:32 Term Symbols 0:12:02 Holes and Electrons 0:13:47 Shapes of H Wavefunctions 0:17:39 Hydrogen Orbitals 0:18:41 Nodes 0:23:05 Spin-Orbit Coupling 0:24:09 Internal Magnetic Field 0:32:23 Other Alkali Metals 0:37:22 Cesium
10.5446/18892 (DOI)
Welcome back to Chemistry 131A. Today we're going to continue our exposition of atomic spectroscopy, focusing on selection rules, coupling, and terms. Of course, atomic spectroscopy is the basic tool by which everybody learned that the atom was quantized, and so it's of very central importance in this whole area of endeavor of quantum mechanics applied to small things. Various sets of transitions are observed. Here is the energy level diagram for the hydrogen atom, not to scale, because if you do it to scale, the levels crowd together too much at the top and it's very hard to see what's going on. There's a series called the Lyman series in which an electron jumps from n equals 2 to n equals 1, or 3 to 1, or 4 to 1, and so forth. And then there's another series, the Balmer series, that's in the visible range, in which the electron jumps to the n equals 2 state, and then there are other series, and they have names according to who worked on them. The main thing to keep in mind here is that any change in the quantum number n in the energy is allowed in principle. The only thing that has to be satisfied is that there has to be conservation of energy, and that's taken care of automatically, because the photon is emitted with a quantum h nu which is exactly equal to the energy loss of the atom. So energy is conserved. In this diagram, the zero reference energy is at the top, and all the energies quoted there are negative, being more stable than a proton and an electron at rest at infinite separation. The selection rules are a little bit more complicated, however, and the first thing is that a photon has one unit of angular momentum, h bar. Angular momentum is conserved, and so when the electron makes a transition there has to be a change by one unit of angular momentum. So that means that although we draw these things coming down to the one, in fact they're all coming from specific values of L. It's 2p to 1s, 3p to 1s, 4p to 1s and so forth, and it's always p. We can't have 2s to 1s, because then delta L is zero and that's not allowed. That doesn't conserve angular momentum, and likewise we can't have 3d to 1s either, because then delta L is equal to 2 and again that doesn't conserve angular momentum. Now if we have a different kind of transition, or if more than one photon is emitted, that's different, because then, of course, that could be allowed: angular momentum could be conserved in a more complicated kind of process. Those kinds of processes do happen all the time. They're usually just much slower than the single-photon electric dipole allowed transition. If we have transitions to 2p in the Balmer series, for example, they could be from 3s or 3d. If we have transitions to the 2s state, then they have to be again from 3p, 4p and so on. And in general, delta L is plus or minus 1 for electric dipole allowed transitions, and these are generally the very strongest transitions that you see in the spectrum. Not always, however; if the atom is very heavy, like mercury, as we'll see in the next lecture, we could have a strong transition that doesn't seem to follow this rule. There are weaker transitions, magnetic dipole transitions and electric quadrupole transitions. We won't touch on those much, but you should keep in mind that those can and do occur, and therefore you mustn't just assume that this is an ironclad rule. When we use the terms allowed and forbidden in spectroscopy, what we're talking about really is strong and weak. Allowed means the light is green; forbidden means it's red, but some people run red lights.
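Since the delta L rule is easy to mechanize, here is a small sketch with a hypothetical helper name (not anything from the lecture) that encodes the electric dipole rule, delta L equals plus or minus 1, for one-electron states:

```python
# Sketch: electric-dipole selection rule for hydrogen-like transitions.
# Any change in n is allowed in principle; the angular momentum must
# change by exactly one unit because the photon carries h-bar.

def dipole_allowed(n1, l1, n2, l2):
    """True if (n1, l1) -> (n2, l2) is electric-dipole allowed."""
    if not (0 <= l1 < n1 and 0 <= l2 < n2):
        raise ValueError("each state needs 0 <= l < n")
    return abs(l1 - l2) == 1

print(dipole_allowed(2, 1, 1, 0))  # 2p -> 1s: True
print(dipole_allowed(2, 0, 1, 0))  # 2s -> 1s: False, delta l = 0
print(dipole_allowed(3, 2, 1, 0))  # 3d -> 1s: False, delta l = 2
```

Remember the caveat above: this flags the strong, single-photon electric dipole transitions; magnetic dipole and electric quadrupole transitions can still occur weakly.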
Here's a compendium from the NIST database, just to show you how accurate and fastidious these spectroscopists have been in cataloging these energy levels. This one has a different energy zero. In this case, energies are quoted from the 1S level of the hydrogen atom being zero, and then up from there. But the amazing thing about these numbers is that they have six, seven, eight, and nine digits of accuracy. So you can see that they had to take a lot of precautions when they were doing these. And if you look closely, there are multiple entries. There's 2P and then there's 2S and there's just 2 and then there's 2P again and so forth. And we'll see in a second what these things mean. But you have to keep track of individual substates in these systems, and oftentimes you put on a magnetic field to be able to tell which state is which and how they move around, to keep track of them. And they go all the way up to 6H. So in the periodic table, we don't go beyond F. But if we go to excited states of atoms, it's easy to go very high and to have very high values of the angular momentum L. If you look closely, you find out that the first 2P state that's quoted is ever so slightly lower in energy than 2S, which is quite a surprise, because in other elements, of course, 2S is lower than 2P. That's why lithium is 2S1 and not 2P1, for example. And we don't fill up the 2P until we get over to boron. But in hydrogen, it's kind of an anomaly: one of those 2P states is lower than the 2S. And that was extremely interesting to theoreticians, how that could possibly be. And in studying that in great detail, they worked out important clues about the residual electromagnetic field that permeates our entire universe, like the zero point energy of a harmonic oscillator. You can think of the electromagnetic field as some kind of electromagnetic oscillation. And it turns out that even in a vacuum at absolute zero, in this universe, apparently, we can't have nothing. There's still some zero point energy left over. And that can bounce the electron around. And then the question is, if it bounces a 2S electron around or bounces a 2P electron around, what's the difference? And if you actually follow through the calculation, it's very, very beautiful work to follow. And you can actually derive that the 2P should be lower. Once we have a series of these states, we can use the measured energies, if we're careful, to estimate the ionization potential, also called the ionization energy. And that's just an extrapolation to what the energy would be at n equals infinity. And it's very important to know what that number is. That's one of the main things that's listed in freshman chemistry books in the periodic table to try to account for various trends. If an electron is easy to ionize, that means that element's much more likely to give an electron up to another atom than an element that has an electron that's extremely hard to ionize, for example. Let's try a practice problem then, and have a look at how we might tackle estimating the ionization energy. Practice problem 18. Consider the following unassigned (that means we don't know n or l) emission lines from a hydrogen arc, all of which terminate in a common level; assign them and estimate the ionization potential, or the ionization energy, same thing, of hydrogen. Well, the hydrogen arc is just: we put a big lightning strike, a voltage, through, and we excite the atoms, and then they emit light.
And if they terminate in a common level, that means they're like the other diagram I drew. We have arrows coming down to this, to this, to this, and so forth, always to the same level. And that means the change in energy has to do with the upper level but not the lower level. Of course, in actual fact, you just get all kinds of lines, and you have to be very clever to figure out which ones go with which and which go together. There are various tricks that were worked out to do that. So here are the numbers. They all have four or five digits in them, 1.8887 and so forth. And let's just take these four numbers, quoted in electron volts, and let's figure out what the ionization energy should be. The first thing is we have to figure out what common level they terminate in. And if we don't do that, then we aren't going to be able to figure out anything. And so the way we're going to do that is we first write this formula: delta E is equal to the Rydberg constant times (1 over n common squared minus 1 over n k squared), where nk is some higher level, higher up, dropping down. We don't know what the common level is, however. And we don't know what the value of R is, because it wasn't given. And on an exam, if you aren't given the value of R in a problem like this, that means you're supposed to figure it out without that. So let's try to do that. We don't know n common. We don't know R. We have some values of delta E. And we have to try to figure out if they fit this formula. A very bad way is to just start plugging things in and seeing what happens. A very good way is to proceed systematically and figure out if we can get a straight line plot. Chemists love straight lines. Before computers, straight lines were absolutely essential. And the reason why is that the human eye can distinguish a straight line from any curve. But the human eye cannot distinguish curves that are slightly different. Even if they're statistically significantly different, it's very hard to tell just by looking. And before computers, people had to plot things on paper often and look at them. And in order to tell what's what, if you could organize your equation so that it came into a straight line plot, then you could tell. You could tell if there was a systematic deviation, some kind of slight curvature one way or the other way. You could tell if there was scatter, noise, and so forth. And all those are very important to quantify when you're trying to estimate something. We can cast our equation into a straight line form by letting y equal delta E, letting x equal 1 over n k squared, and then letting the intercept b equal R over n common squared. And we can guess values of nk and see which look good. So we have to assume a value of n common, and then we guess values of nk, all greater than that, and we see what looks good. Once the line is straight, then we can extract the slope, y equals b plus mx, and m in this case should be minus R, the Rydberg constant. Once the line is straight, then we can get R. Now the lowest common level could be n equals 1, so in the absence of any other information, we should assume that first and then work our way up. If the common level is very high, really, really high, this would be a disaster, because it would take forever. But we know that it can't be that high, because we see a couple of electron volts, and so we know it has to be one of the levels further down.
If it's very high, then all the transitions would be very, very small numbers, because we'd already be most of the way up to the ionization energy. So let's try n common equals 1, and in that case, we could have nk, the upper level, be 2, 3, 4, and so forth. If some transitions are missing for some reason (we just didn't see them, they were faint, we made an error, some other reason), then this kind of exercise can get very frustrating, because the points don't appear to fit, and if you have to start deleting values, then it's very tedious; but in any kind of exam problem, we'll never have something like that. In the lab, for various reasons, it can be that some transitions are dark, something happens, there's another pathway where the electron can go, and it's very hard to see that, and sometimes it takes a lot of patience to be able to work out what's going on. So let's plot these. I drew up a table: I have the delta E values for y, and then I have the 1 over nk squared, and for 2, it's a quarter, for 3, it's a ninth, and so forth as we go up. And if we plot those, which I've done here, well, first of all, we notice that the slope is negative, and that's, of course, exactly what we expect, because the slope should be negative R, and R is a positive number, so that part's okay. And let's plot them now, and here I've drawn a plot. Now, there are four black dots on this plot on slide 370, and if you just draw a line through the four black dots by linear regression, you will get a pretty good fit. In other words, there is a line that misses them all, but is pretty good, and so you have to decide if that kind of line is going to be good enough for what you're doing, and in this case, it's not nearly good enough. And what I've tried to do is emphasize that by first drawing a line through the first two points on the left, the red line, that has a certain slope; and keep in mind that these points have very, very, very tiny errors. So although I've put these black dots so that you can see them on the slide, the points themselves are very, very accurately determined. We draw that red line, and then we compare that with the slope from the last two points, the blue line. We only have four points, so we have to be kind of careful, and we plot that, and when you do it that way, what you see is that the blue line has a much shallower negative slope than the red line; it is quite a bit different. And if you took the center two points, it would have a slope in between, and what that means is that this is curved. It's not n common equals 1, and this is not good enough, but you could easily assume it worked well enough if you're used to other kinds of data that isn't this well determined. In any case, we're going to discard that and say, well, n common equals 1 doesn't seem to work. Let's go up to the next level. So this is not close enough. If we assume the next level up, n equals 2, for the common level, and then we have 3, 4, 5, and 6 for nk, then we get 1 ninth, 1 sixteenth, 1 twenty-fifth, 1 thirty-sixth. We put those values in and we get this table that I've shown on slide 371, and using this data, we get the plot on the next slide. And now the difference is it's dead on. So I put one blue line and, boy, it goes right through all those points, perfectly through them, nice and straight. So there's quite a bit of difference between the two when you view it this way. And if you pick, I'll let you try it on your own.
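Before you try the common-level-equals-3 case yourself, here is a sketch of the whole guessing procedure in Python. Only one of the four measured energies (1.8887 eV) is quoted in this transcript, so the data here are generated from the Rydberg formula itself; the point is the mechanics of the straight-line test, not the numbers.

```python
import numpy as np

R_EV = 13.5984          # Rydberg constant in eV, used here only to
                        # synthesize the four lines; the fit recovers it
upper = np.array([3, 4, 5, 6])
delta_E = R_EV * (0.25 - 1.0 / upper**2)   # four Balmer-type lines in eV
# delta_E[0] is 1.8887 eV, matching the one value quoted above.

for n_common in (1, 2, 3):
    # guessed upper levels: the next four integers above n_common
    nk = np.arange(n_common + 1, n_common + 5)
    x = 1.0 / nk**2
    slope, intercept = np.polyfit(x, delta_E, 1)
    worst = np.max(np.abs(delta_E - (slope * x + intercept)))
    print(f"n_common={n_common}: slope={slope:9.4f} eV, worst residual={worst:.1e} eV")
```

Only n common equals 2 gives residuals at machine precision here; its slope is minus R, so minus the slope, 13.5984 eV, is the estimate of the ionization energy, just as in the lecture's plot.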
If you pick the common level equals 3 as the terminating level and do the same exercise and plot it, you will see that it is significantly curved again. But not by much. It's curved, but you have to have a critical eye in order to see it, and you have to understand how accurately these plots should come out. If you do a linear regression on this and you take the slope, then the slope m is equal to minus 13.5984 electron volts. That means R is 13.5984 electron volts, and that's the ionization energy, because if we put n equals 1 in the formula and n equals infinity, the ionization energy is just R. And therefore, the ionization energy for hydrogen, according to this analysis with those four points, should be 13.5984 electron volts. If you go back to the NIST compendium of data, what you'll find out is that we did pretty well. That comes out very close to the exact value of the ionization energy for hydrogen. And doing that with just four points is pretty good, because we didn't assign very many of the transitions. It's especially important to note that it's not enough to fit the data if you don't do a statistical analysis of the fit. If you just get a fit, it can look great, and it can be completely wrong; you have to do a statistical analysis based on what you think the errors in the data points might be and how well it fits. If you don't bother to do that (and most of the time people don't, they just do some fit, it looks good, they assume it's right, and they quote an answer), you can get really crossed up if you have systems that are in fact very subtle, where the curvature is subtle and the data is very, very highly precise. So in these cases, if it's not right on, then your model isn't correct. Keep that in mind when you're doing these kinds of problems. Okay, now I want to talk about coupling schemes, and this has to do with why there were two values of 2P in the NIST database. There was one of them, and then another one, and you think, well, an electron can be in 2P, so what's the difference? Is it 2px versus 2py? No, it's not, that's not the difference. The difference is that the electron itself has a magnetic moment, and we have to take that into account when we have a nonzero value of L. Well, kind of; in order to talk this through, we'll have to assume a kind of classical model of the atom, but this is just a heuristic way to look at what's going on. We don't actually believe that something's orbiting around, but anyway, if L is not equal to zero, we can think that something is orbiting around in some sense. If I'm an electron and I'm orbiting around the proton (or the nucleus, if it's a bigger atom), and I go on the electron (so I'm on the electron, and boy, is that a wild ride), what I see is the proton going around me, because I'm the fixed guy, and the proton's whizzing around me in the other direction. But the proton has a charge, and so what I see is a charge whizzing around me in some kind of pattern, like a circle or something, and that to me as the electron looks like a current loop. It looks like an electromagnet. But as an electron, I have an intrinsic bar magnet, the spin that we discovered from the Stern-Gerlach experiment, and therefore my own bar magnet can either be aligned with this magnetic field that I appear to feel, or it can be the other way, and those two could have slightly different energy, because there's some magnetic energy.
We didn't take that into account in the Hamiltonian when we wrote it down, but in fact we know that it's there, and if we were more sophisticated, we could take it into account. And those two orientations are pretty much the resolution of why there are two P levels. They're very close, because this magnetic orientation of the electron, either aligned with or against this field that the proton appears to induce, is very tiny compared to the electrostatic interactions with the potential. And so it shows up several digits out. But as we'll see (we'll do a calculation, probably with sodium), this magnetic field that the nucleus appears to generate at the electron, as measured by the difference in the energies, is huge. And it's bigger than almost any magnet we can make in the lab. So it has a very big effect, because things are moving so fast and it's close by; it has a very large effect on the electron compared to the kinds of magnetic fields that we can generate in the lab. So, quite a bit bigger than 1 Tesla. 1 Tesla would be the kind of magnetic field that, if you went in for an MRI in a whole body scanner and you had a state of the art annular magnet, a superconducting magnet, you might have in there. And it's very much bigger, of course, than any kind of normal magnet that you would ever find. So this kind of interaction between the spin magnetic moment and the orbital motion is called spin-orbit coupling, because it's a magnetic interaction between the apparent magnetic field of the nucleus and the intrinsic magnetic field of the spin. Whenever there is more than one electron, the electrons themselves can interact magnetically. They're little bar magnets. They can either align with each other or not align. It turns out (we'll see it when we get to it) that if they're in the same orbital, they have to be paired. But if they aren't in the same orbital, if there's an excited electron and there's another one here, then they can have any kind of orientation, either for or against. And if we have an open-shell atom with more than one electron, then these spins can add up, and we first add up all of the intrinsic bar magnets to give what's called the total spin angular momentum, S. And if there are electrons with nonzero L, then there's orbital angular momentum. If we have more than one of those, we add all those up (I'll explain how in a second) to give the total orbital angular momentum, big L. And then L and S, again magnetically, interact to give the total electronic angular momentum for the atom, which is given the symbol J: L and S added together. If you take it one stage further, sometimes the nucleus, for example a proton, has a magnetic moment too. And its spin is one half, like the electron. And in that case, it can either be aligned or not aligned. And we have to add its angular momentum, its magnetic interaction with J. And in that case, we have to invent a new letter, and that new letter is F. We'll usually quit at J, but keep in mind that nuclei do have magnetic properties as well. Their spin is not always one half like the electron. A deuteron has spin one. Chlorine has spin three halves. And if you look very closely at electronic transitions, you actually have to be very careful to take into account the properties of the nucleus to explain what you see. If you try to leave that out, then there are missing pieces of the puzzle and you don't appear to get the right answer.
Here on slide 376 is an image adapted from Wikipedia which shows how you can take these two cones, a big cone for L, with L pointing somewhere, and another cone for S, and combine them to get a specific value of J. And then there are 2J plus 1 substates, magnetic substates of J, which are the projections onto the Z axis of the total angular momentum. How do we add these things up to get big L? Well, in order to really do this correctly, you have to do a little bit of work with the theory of angular momentum. But as I will explain as we go along, it makes sense that if I've got two electrons, let's say, and they have orbital angular momentum L1 and L2, then I could add them up. They could be parallel, and that would be L1 plus L2. So that would be the maximum, if everything were co-linear, everything were the wind blowing from behind. And then they could be out of alignment. And because things are quantized, the next value is L1 plus L2 minus 1. And the worst case is when they're completely misaligned. But the total angular momentum L has to be a positive number, because it refers to L squared. And so we terminate the series at the absolute value of L1 minus L2. We say absolute value because we don't know whether L1 is bigger than L2 or less than L2. And so we just put the absolute value there to terminate the series. It terminates at a positive number, which could be zero, because L1 and L2 could both be one, for example. This series is called the Clebsch-Gordan series. And the same algebra exactly applies to the total spin angular momentum, big S. Big S can be S1 plus S2, or S1 plus S2 minus 1, blah, blah, blah, down to the absolute value of S1 minus S2. And we can see this in a vector picture, keeping in mind that the interaction between these guys is magnetic in nature. And so it's the little bar magnets that are doing the talking here. And then they're adding up in certain ways to give these overall angular momenta that we observe. For an example, let's just take two electrons and let's couple the two spin one-halves. We've got the two bar magnets here. Let's couple them together. They're interacting. How could they be? Well, there are four possibilities. They could be like this, like this, like that, or like that, because each electron can just be up or down. So I've quoted that in a notation which you may see in a more advanced course; these are called kets. But for our purposes, we don't need to know what they're called. They just have the arrows in them. They're either both up, up down, down up, or down down. Those are the four possibilities. And we have to see how these can give an overall spin, big S, that's not going to be one half like the spin of either electron, because the Clebsch-Gordan series says, well, big S could be S1 plus S2, that's one, or it could be S1 plus S2 minus one, that's zero, and that's the absolute value of S1 minus S2, since they're both spin one half. So we could only have big S be one or zero when we couple the spins together. So let's look at these then from a more pictorial view, in a vector diagram. Here's a very nice picture on slide 379. One side, in blue, is called the singlet. We'll see why in a second. The other side, in red, is called the triplet. The singlet has one state. The triplet has three states. That part of it makes perfect sense. And we can see I've written these kets next to each orientation. For the triplet, they could be both parallel, both up.
They could have one up and one down, but pointing the same direction, like that. So this would be M sub S equals one, M sub S equals zero, and then they could both be pointing like this: that's M sub S equals minus one. Those are the three values of the M quantum number that would go with an S equals one state: M sub S equals plus one, zero, or minus one. And then the singlet has just M sub S equals zero, because S is zero, so that can be the only value of M sub S. And for that one, we draw it slightly differently. Instead of drawing it like this, we draw it like that. So we've got one up and the other pointing the other way. They add to zero, but they're kind of like the two guys in the Star Trek time tunnel: they're just stuck battling each other in this state, absolutely anti-symmetric. Instead of calling one of them up, down, and the other one down, up, you may notice that I have this root two over two (down, up plus up, down) for the triplet, and root two over two (up, down minus down, up) for the singlet. Why is that? Why do we have to have them that way? Well, we know of course that since the electron can go through both slits, it's perfectly feasible for quantum systems to decide that they want to exist in a superposition of what we might think of as the most concrete things, namely, well, it's either down, up or it's up, down. Not in a quantum system: it could be 50% of one and 50% of the other together. That could be the solution. We see then we've got four states and they break apart into three and one, and that's a common pattern, because the odd numbers add up to a perfect square. And so if we have a certain angular momentum choice, three here and three here, we get nine, and we can write that as a sum of one, three and five. And that's basically what the Clebsch-Gordan series says. It turns out that neither of the states up, down or down, up will do, and the reason why is that these guys, whether in the triplet or the singlet or in other systems, have to have the same stripes. They have to have the same characteristics with respect to swapping the particles. The particles are identical, so which one we call one and two is unimportant. But if we have two of them up and we swap them, we get the same thing. And if we have two of them down and we swap them, we get the same thing. But if we then say, well, I think the middle one for the triplet is up, down, and we swap them, we get a new guy, we get the other one, which wasn't part of the mix, and we can't have that. When we swap them, we have to get the same thing, because it's part of the same series. It has to have the same symmetry as the other ones. Otherwise it doesn't work. And therefore, we have to take a combination between up, down and down, up such that when we swap them over, we get the same thing as the M equals zero state of the triplet. Unless you've seen how to do this, it's not quite so easy to figure out how you might do it. But let's explore this symmetry by artificially coloring the arrows red and blue. That way we'll keep track of them. When we swap them, we'll see whether we get the same thing or not. At the top here on slide 381, I've shown a picture with the coloring in. We start with this: we know we can't have just one of them, because it switches to the other. So we know we've got to have both of them, and we know we've got to have 50% of each. Let's forget about the root 2 over 2.
That's the normalization constant, just to keep the probability of being in some state equal to 1. And let's just take the combination of adding them. Let's take up in red, down in blue, plus down in red, up in blue. And then let's swap the positions of the two arrows. What we get then in the first one, where we're up in red and down in blue, if we swap them, is down in blue, up in red. Now in actual fact, the red and blue don't matter, but it lets us keep track of it. So we get down up. The other one, which was down in red, up in blue, becomes up in blue, down in red. And then if we just change the order, we find we get up in blue, down in red, plus down in blue, up in red. That's what we started with, except the colors are swapped. But the colors don't matter, because the electrons are identical particles. We're just coloring them so that we can keep track of what we're doing; otherwise we can't tell what we've swapped. And the key is that we get plus one. We get the same thing. The eigenvalue for this swapping is plus one. So that's kind of a parity value. Plus one for swapping this way, plus one for this one, plus one for swapping that way: they all go together. All three of them are birds of a feather. If one of them is plus, then the other one, to be orthogonal, can't be plus as well. And therefore it has to be minus. And if we take that combination, which I've shown in the second line here, and we take up, down minus down, up, and we swap them, we get down, up minus up, down. And that's equal to minus one times what we had before, because of the fact that they're opposite. And what that means is that the singlet has different symmetry than the triplet. And so although they both have M equals zero states, the states are different, because one is symmetric under exchange and the other is anti-symmetric under exchange. In more complex systems, you have to look at the symmetry of each of the states and decide if it fits in or not. And there are well-documented procedures for doing exactly that if you have to. So whether you're symmetric or anti-symmetric under exchange is a very important property. It's not just trivia. Okay, suppose then we have these values. We've seen we can take two spin one-half particles and get S equals one or S equals zero. We can do the same thing with L, and we can follow that through the same way. Now we've got big L and big S. How do we know what values of J will result? Big L is something, could be two. Big S is something, could be one or three halves, depending on how many particles we have. And we want to figure out what values the total angular momentum of the atom, J, can be. Well, the answer is that it's pretty much the same thing again, because angular momentum always adds in exactly the same way. It doesn't really matter what it is. If L and S happen to align, that's the maximum value of J, L plus S. And then if they don't, we decrement by one quantum, and we have L plus S minus one and so forth. And we go down to, again, the absolute value of L minus S, because we don't know whether L or S is bigger. But we know that J, referring to the square of the angular momentum (or the square root of the square of the angular momentum, if you like), has to be a positive thing. And so we can't have anything go negative. And therefore, we get another Clebsch-Gordan series: L plus S, L plus S minus one, and so forth.
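Because the same Clebsch-Gordan series keeps reappearing (for two spins, for two orbital momenta, and now for L with S), a tiny sketch is worth writing down. The function name is mine, not from the lecture; fractions keep half-integer values exact.

```python
from fractions import Fraction

def clebsch_gordan_series(j1, j2):
    """Allowed totals when coupling j1 and j2: j1+j2 down to |j1-j2| in steps of 1."""
    j1, j2 = Fraction(j1), Fraction(j2)
    J, out = j1 + j2, []
    while J >= abs(j1 - j2):
        out.append(J)
        J -= 1
    return out

for j1, j2 in [(Fraction(1, 2), Fraction(1, 2)), (1, Fraction(1, 2)), (1, 1)]:
    print([str(J) for J in clebsch_gordan_series(j1, j2)])
# ['1', '0']       -> triplet and singlet
# ['3/2', '1/2']   -> the sodium 3p case coming up next
# ['2', '1', '0']
```

Note the counting works out: coupling two spin one-halves gives 3 plus 1 equals 4 states, matching the four arrow combinations above.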
And therefore, what this means is that if you have a certain electronic configuration of an atom that's open-shell, that has several electrons, then, depending on the details of how these magnetic moments are interacting, and since these energies are small, if I have something like a blast, an electric arc or something like that, they're all going to be there. It's not like one of them is way high compared to the others. If there's enough energy to populate the excited state, there's probably enough energy to populate all the possibilities randomly, statistically, just depending on how many possibilities there are for each state. And therefore, we need some way to keep track of these things, in order to figure out what kinds of emission lines we're going to see in the spectrum and to assign it. And the way we do that is with something called a term symbol. The term symbol summarizes not only what state the atom is in, but tells you what L, S, and J are. And the term symbol has the structure: 2S plus 1 on the left; big L, which is a letter like S, P, and so forth, just like for the electron itself with little s and little p, except this is big L, the total orbital angular momentum; and then J as a right subscript, to tell you which way S and L are adding up to give the result for the total angular momentum of the atom. And in order to figure out what kinds of transitions you're going to see, you have to know what the term symbol is. That's by far the best and most concise way to look at things. And therefore, you have to know what these are. 2S plus 1 is called the multiplicity. And if S is 0, it's called singlet, because there's one state there, like we saw with the S equals 0 for the two electrons. S equals 1 is called triplet. S equals one half, if we have that, is called doublet, and so on. We can go all the way up. And let's try an example with this then, and see how we might use this symbol. Here's practice problem 19. Let's look at the sodium emission spectrum. The ground state of sodium has a 3S electron outside a closed shell. The first excited state is 3P, because in the sodium atom, 3P and 3S are quite far apart. And so that turns out to be a transition that involves visible light. It's not a very tiny transition energy like radio waves. So we can see that easily in the sodium spectrum. The question is, if we have the electron in the 3P state outside a closed shell, what terms can arise? In other words, what are the possibilities for these term symbols? Which way can L, S and J add up? Well, for the P state, little l is 1. There's one electron, and therefore big L is 1, because the Clebsch-Gordan series terminates: big L is little l in this case, because there's just one electron. Why don't we include all the other electrons in sodium? They're inside a closed shell. And inside a closed shell, the orbital angular momentum is zero and the spin angular momentum is zero. And so inside a closed shell, you just forget about it, throw it away. You don't want to include all those, or you'll spend the rest of your life playing around. And there's one electron, so S is one half, and therefore the two possible values of J are 1 plus a half and, in absolute value, 1 minus a half. That's it. The two possible values of J are 3 halves and 1 half, and therefore, since 2S plus 1 is 2 times a half plus 1, which is 2, the two terms are doublet P 3 halves and doublet P 1 half. Those are the two terms that can arise.
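Here is the same bookkeeping as a sketch in code: given big L and big S, produce the term symbols. The helper name and text formatting are mine, but the 2S+1, letter, J structure is exactly the one just described.

```python
from fractions import Fraction

L_LETTERS = "SPDFGHI"   # capital letters for the total orbital angular momentum

def term_symbols(L, S):
    """All terms 2S+1 L_J arising from total L and total S."""
    S = Fraction(S)
    terms, J = [], L + S
    while J >= abs(L - S):
        terms.append(f"{2 * S + 1}{L_LETTERS[L]}_{J}")
        J -= 1
    return terms

print(term_symbols(1, Fraction(1, 2)))  # ['2P_3/2', '2P_1/2'], the sodium excited terms
print(term_symbols(0, Fraction(1, 2)))  # ['2S_1/2'], the sodium ground state term
```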
So as I said, closed shells: ignore them. They have a total angular momentum of zero. The two terms have close but not identical energy. Both of them, it turns out, as we'll see, can emit a photon to the ground state, 3S. And therefore we get two closely spaced lines, which are called the sodium D lines, which are quite famous because they were studied very early. In fact, it was even before Thomson figured out that the electron existed as a particle; the year before that, Zeeman was looking at sodium, putting on magnetic fields and looking at how these lines changed, and working out a ton of information from these yellow lines, which were very easy to excite, for example, with sodium vapor in a flame. And that's, of course, the flame test to see what kind of element it is. You put a flame on and you look at the color, and if it's yellow, it's sodium. These are also the characteristic yellow color of the fog lamps you see down toward the beach in places like San Diego, or at least you used to. They might not be so energy efficient now. But the yellow color was thought to be much better when you had fog, because with very strong white light, sometimes what you see is just the fog. You don't see through the fog. You instead light up the little droplets and you get a lot of reflection from the fog, so you don't see the pedestrian. Whereas if you have the yellow color without the white (which is why you use the sodium, because you have these two lines), then you just see the pedestrian, and then you put on the brakes. The term symbol gives us a really quick way to tell whether a transition is, quote, allowed or forbidden according to the electric dipole selection rules. In light atoms, L and S are still pretty good quantum numbers. The reason why they aren't perfect quantum numbers is that L refers to orbital angular momentum, and that came from hydrogen. In hydrogen, everything is spherically symmetric, because there's that one proton and one electron. But in other atoms, there are other electrons, and they muck things up. And what that means is that L is not quite so good, because things are wobbling a little. So yes, it's going around, but there's a little bit of wobble, and then there's S in there and these magnetic interactions with more than one particle. And so they're 90% good but not 100% good anymore, and that means that you can get some forbidden transitions, and if you look closely, you'll see some weak things that you can explain by that. But one thing we know for sure, if we're considering the electric part of the light wave that's moving the electron around: the electric part of the light wave can't change the magnetic bar magnet of the electron. And therefore big S can't change in the transition. So delta big S has to be zero. And because the photon has one unit of angular momentum, delta big L has to be plus or minus one. And putting those together, we get that delta J is zero or plus or minus one. However, if J starts at zero, we can't terminate at J equals zero, because then it's impossible to satisfy the delta L selection rule as well. So in that case, we get shut down. So those are the electric dipole selection rules in terms of the term symbol: delta S is zero, delta L is plus or minus one, and delta J is zero or plus or minus one. We'll explore in the next lecture how we can use this notation to analyze some of these spectra.
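And since the lecture closes with the selection rules, here is a sketch of them applied to term symbols. The function is a hypothetical helper of mine, but the rules are the ones just stated: delta S equals 0, delta L equals plus or minus 1, delta J equals 0 or plus or minus 1, with J equals 0 to J equals 0 shut down.

```python
from fractions import Fraction

def dipole_allowed_terms(L1, S1, J1, L2, S2, J2):
    """Electric-dipole selection rules between two terms (L, S, J)."""
    S1, S2, J1, J2 = map(Fraction, (S1, S2, J1, J2))
    if S1 != S2:
        return False            # the light's electric field can't flip the spin
    if abs(L1 - L2) != 1:
        return False            # the photon carries one unit of angular momentum
    if abs(J1 - J2) > 1:
        return False
    if J1 == 0 and J2 == 0:
        return False            # J = 0 -> 0 can't also satisfy delta L = +/-1
    return True

half, three_halves = Fraction(1, 2), Fraction(3, 2)
# Both sodium D lines, 2P_3/2 -> 2S_1/2 and 2P_1/2 -> 2S_1/2, come out allowed:
print(dipole_allowed_terms(1, half, three_halves, 0, half, half))  # True
print(dipole_allowed_terms(1, half, half, 0, half, half))          # True
```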
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and study how it describes the behavior of very light particles, the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:41 Atomic Spectroscopy 0:09:08 Emission Spectra 0:21:41 Coupling Schemes 0:26:17 LS Coupling 0:29:42 Adding Angular Momenta 0:31:47 Vector Model for Coupling 0:38:40 Symmetry Considerations 0:41:48 LS Coupling 0:44:28 Term Symbols 0:50:32 Selection Rules
10.5446/18890 (DOI)
Welcome back to Chemistry 131A. Today we're going to talk about spin, the vector model, and begin to talk about hydrogen atoms, a subject that we'll continue for a couple of lectures. Recall that when Stern and Gerlach did their measurement, there were two bands observed, rather than a continuum of silver atoms distributed. And that was interpreted to mean that there were two possible states for the angular momentum of the intrinsic magnetic moment of the electron. And that would correspond to M sub S equal to plus one half or minus one half. And the interesting thing is that it was a half rather than an integer, which made it seem fairly mysterious at the time. In the vector model of angular momentum, we visualize the angular momentum vector as lying somewhere on a cone. And the Z component we imagine that we've measured, and that has a definite value. And we've measured the total angular momentum; that also has a definite value. But the X and Y values are indeterminate, for reasons that we'll see in a minute. For an electron then, there are just two such orientations of this particular cone. And we tend to just call them up and down. And recall that we don't actually physically think anything is spinning or rotating, or that there's a little planetary model inside the electron. We have no evidence at all for anything like that. This is just an intrinsic property: there is a magnetic moment, and it behaves as if there were an angular momentum with the two projections, M sub S equals plus or minus one half. We can do spectroscopy, we can do magnetic resonance, we can make transitions between these two magnetic states. And a lot of information about molecules and photosynthetic systems and various other kinds of things has been gleaned by actually seeing, if there is an unpaired electron, what the energy difference is between the two magnetic states, getting a spectrum of that difference, and seeing how that spectrum may change if we irradiate the sample with light or do other things to influence the molecular structure. So for a spin one half, we've got a picture like this of these two cones, one cone up, one cone down. These are the two possibilities, and there's a definite value of the z component of the angular momentum, h bar over two, and the other possibility is minus h bar over two. For the x and y components, though, they're indeterminate, and that's why we just draw this cone, because that's meant to show that the x and y components could be anywhere on this cone. We'll see why in a second. For a higher angular momentum, we would have more cones, and we'll see that in a second. What about the usual conditions, that the wave function, whatever it is, be single valued and have at least a second derivative? If we put m equals one half into e to the i m phi, imagining some kind of wave function like that, then the problem is that it changes sign when phi goes around by 2 pi. But keep in mind that spin is different, because when we derived that e to the i m phi, we actually had a physical particle on a physical ring, and we got those wave functions from the boundary conditions there. But here, we have an observation of a magnetic moment, and we don't necessarily know that it corresponds to anything like that. There is no spatial component to spin. It exists just as the magnetic moment of the electron, but there's nothing moving around in space that we can ascertain.
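To see the sign problem being pointed at, here is a two-line numerical sketch (mine, not from the lecture): e to the i m phi with m equal to one half flips sign after one full turn and only comes back after two.

```python
import numpy as np

m = 0.5
for turns in (0, 1, 2):                 # rotations by 0, 2*pi, 4*pi
    phi = 2 * np.pi * turns
    value = np.exp(1j * m * phi)
    print(f"{turns} turn(s): e^(i*m*phi) = {value.real:+.3f}{value.imag:+.3f}j")
# real parts come out +1, -1, +1; the sign flip after one turn is the point
```

That 4 pi periodicity is exactly the spinor behavior described next.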
And the spin part can actually make some calculations quite difficult, because, for example, if you have a molecule with an unpaired electron, then it may be that whether the spin is up or down changes the spatial electron density. Why? Because there are a lot of other magnetic particles around the molecule: nuclei, other electrons and so on. And it may be that if the spin has one orientation of the magnetic moment, it hangs out more over here, and if the spin has the opposite orientation, it ends up in a different part of the molecule. And that means that if you're trying to keep track of chemical reactions where electrons can become unpaired or various things can happen, sometimes your calculation gets much more complicated than you would like, because you have to know something about the spin, or average over it, or figure out what's going on. The spin one half, if you want to think about it, is rather like a Mobius strip, in that if I put a twist in a piece of paper, then rather than coming around like a ring and pointing the same way (if I put just half a twist and tape it together), when I come around, I'm actually pointing inside rather than outside, and when I come around again, I'm pointing outside. So the wave function repeats when you rotate by 4 pi. That's so-called spinor behavior, and usually it's extremely hard to observe, but if you have a reference state and then a state that you're rotating with respect to the reference (you can do this in magnetic resonance experiments), you can actually see, when you apply a pulse, that the thing changes sign if you go around by 2 pi, and if you go around by 4 pi, it comes back to the same sign again. Here's a picture for the spatial part. Here are the five components of M sub L for an L equals 2 state, which we'll see is called a D orbital, and again the cones have to match, and the length of L has to also match, because we imagine we've measured L, so we know L times L plus 1, and then we've measured the Z component, so we know that, and you get this series of cones. And again the LX and LY components are completely indeterminate, but you get these pretty pictures, and this is the so-called vector model for angular momentum. We'll see in atomic spectroscopy, in a couple of lectures, that keeping track of the angular momentum of the electron, depending what value of L it has, and keeping track of the spin, depending whether it's up or down, is very important for keeping track of where the spectral lines will appear and for assigning the spectrum. To assign the spectrum means that we know the energy level where the electron started, and then it falls down to a lower energy level and emits light of a certain frequency that comes out someplace, and we know then the energy levels, and we know as many quantum numbers about each energy level as it's possible to know. If we know that, and we can match up everything, then we say that we've assigned the spectrum, and then we can infer a lot about the structure of the atom by where these lines actually happen to be, like the sodium D lines, for example, that give the characteristic yellow color of a sodium vapor lamp. Why do we have this uncertainty?
Well, it turns out that this is another manifestation of the uncertainty principle, and at this point, rather than just talking about delta p delta x, we can make a connection with a much more general idea. Namely, if we're going to make a measurement of one thing and then another thing, the question is whether, if we make the measurements in the reverse order, we get the same result or not. If we get the same result by making the measurements the other way round, then we imagine that these measurements are compatible, and it's okay. If, on the other hand, we get a different result (like we saw with the coin, where if we wanted to see whether it was heads or tails, or we wanted to see how thick it was, we ended up getting different results at random for whether it was heads or tails after we measured the thickness), then they're incompatible. And this incompatibility is actually coded in the operators themselves, so we don't have to guess about it. We have a mathematical way to analyze what the operators are doing, so we can figure it out. It turns out that the square of the angular momentum, L times L plus 1 or S times S plus 1, and the Z component (or any other component, but when we're picking a component, we pick Z by convention) are compatible. They can both be determined, no problem at all, no problem with the coin on the edge. But if we try to go further, in addition to LZ and L squared, and say, well, I'd like to also know LX and LY as well, then things get fouled up. What happens is, if we measure one of those, then LZ gets changed at random, and if we go back and measure LZ, the other one gets changed at random, and no matter how many times we go back and forth, we can't get a consistent result, because the result we get depends on the order that we measure it in, and furthermore, it's random. It isn't like a flip-flop or anything orderly that we can predict; it's just a random value. And the key is whether the operators commute. The commutator really is this: you put one operator on one side and the other one behind it, and then you do it the reverse way (you put this one in front and that one behind), and you put them both on the wave function, and then you subtract them. If you subtract them and it comes to zero, then they are compatible, and you can measure them both as accurately as you want. And if they do not, then they are incompatible, and that means that making one measurement will destroy some of the information that you gleaned from the previous measurement. Let's just take our operators, then, for position and momentum. Position, remember: the x hat operator was just to multiply the wave function by x, and the p hat x operator was to take the derivative, minus i h bar times the derivative with respect to x. Suppose we want to figure out the commutator. Then we write the square bracket, and we write x comma p, and what that means is: take x p x minus p x x. So let's do that. The first term is x, and then on the right hand side, minus i h bar d by dx (that's p hat x), minus the opposite order, which is minus i h bar d by dx of x. And what we do then is we put our wave function on the far right of this. We don't insert it where it might seem to go, next to the derivative, past the x; we put it on the far right hand side of these operator equations. Always remember that. Don't start inserting things in the middle; keep them to the right, because the way operators work is left to right, by convention.
Let's apply this commutator to some wave function psi of x that's completely arbitrary; we don't care what it is, as long as it follows our rules for being continuous and having derivatives and so on. The first term then is quite easy, because it's x times minus i h bar times the derivative of psi with respect to x, whatever that may be. The second term is now plus i h bar, because I have minus minus i h bar, times the derivative of the product x psi, and so that means I have to take the derivative of x, times psi, plus the derivative of psi, times x. And if I do that, then I find that two of the terms vanish: the minus i h bar x d psi dx cancels with the plus i h bar x d psi dx. But I end up with that third term, because of the rules for taking the derivative, which is i h bar psi. And so the commutator x hat, p hat x applied to the wave function psi returns a number, which isn't real but has i h bar in it: i h bar psi. Therefore, since psi is arbitrary, what we say is that this relation has nothing to do with psi, so at the end we throw psi out, and we have an operator relation: the operator x hat comma p hat x is equal to i h bar. And the key is that it's not zero. Since it's not zero, we conclude that we cannot measure x position and x momentum simultaneously to arbitrary precision, and I've written this relationship here on slide 312. This is just an operator relationship; you can think of it as i h bar times the one operator, one hat, where the one operator just multiplies psi by the number one. And any time the commutator of two Hermitian operators is nonzero, they are not compatible. It doesn't matter so much what the commutator is, though it gives you a hint about how things are going to behave; the key thing is, is it zero or is it not zero? If it's not zero, they are not compatible; if it is zero, they are. And in fact we can take the three components of angular momentum. Remember that these components are just r cross p, where we take the vector cross product. So we can take L hat z, the angular momentum operator for the z component: that's x hat p y minus y hat p x. And L x is y hat p z minus z hat p y, and L y is z hat p x minus x hat p z. Now, we know that x and p x don't commute. Do y and p x commute? Yes, they do, because when I take the derivative with respect to x of y times the wave function, I don't need to use the product rule: these are partial derivatives, and since there's no x in the factor y, y is treated as a constant. So I can measure the momentum one way and the position another way, in principle, as accurately as I like. I just can't measure them both along the same axis, and I certainly can't measure all three to localize the particle, plus know its trajectory. If you work this out, you can verify on your own that in fact the commutation relations between the angular momentum components are a little bit more interesting. The commutator between position and momentum is just a number (an imaginary number, but still just a number). In fact, if you take the commutator of L x with L y, what you get is i h bar times L z. You not only get a number back, but you get another operator. And if you take the commutator of L z with L x, you get i h bar L y, and if you take the commutator of L y and L z, you get i h bar L x. And when you look at them, these things are in a cyclic permutation: if you start with the three operators like this, you're allowed to pull this one here, put this one here, and move that one there, and you can do it again, and you always get the same result. That's called a cyclic permutation.
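All of this commutator algebra can be checked symbolically. Here is a sketch using sympy, applying each ordering to an arbitrary function psi exactly as done by hand above; the helper names are mine, not from the lecture.

```python
import sympy as sp

x, y, z, hbar = sp.symbols("x y z hbar")
psi = sp.Function("psi")(x, y, z)

def p(f, q):
    # momentum component: -i*hbar times the partial derivative with respect to q
    return -sp.I * hbar * sp.diff(f, q)

# [x, p_x] psi: the two orderings, subtracted
print(sp.simplify(x * p(psi, x) - p(x * psi, x)))       # I*hbar*psi(x, y, z)

# Angular momentum components as r cross p, acting on psi
Lx = lambda f: y * p(f, z) - z * p(f, y)
Ly = lambda f: z * p(f, x) - x * p(f, z)
Lz = lambda f: x * p(f, y) - y * p(f, x)

# [L_x, L_y] psi minus i*hbar L_z psi should vanish identically
print(sp.simplify(Lx(Ly(psi)) - Ly(Lx(psi)) - sp.I * hbar * Lz(psi)))  # 0
```

The same run with L z, L x and with L y, L z confirms the whole cyclic set.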
And it's very interesting that these cyclic permutations look very much like what a rotation would look like. We're going around like this, and every time we take the commutator, we go around to the other component. And in fact there's a very deep connection between the structure of these commutators and the fact that these are in fact referring to a rotation of something, that they are angular momentum. Therefore, for the spin angular momentum, we don't have r cross p; we just have this behavior. But now all we have to say is: we propose that there is such an operator as the spin angular momentum. We know that because we have this magnetic moment; we know it has the value one half for the spin, and that there are two orientations. And we just propose that there are three operators, we call them S x, S y and S z, and that they follow the same exact rules as the orbital angular momentum. And in fact, it's not really the derivatives and so forth that are important; what's more important is whether you follow this particular rule or not. If you do, you're going to behave like an angular momentum. And that's kind of an interesting way to formulate things. And in fact, as I remark here, in more advanced courses that you take on quantum mechanics, we simply propose that there exist operators and that they follow certain commutation relations, and we don't need to specify what they are in concrete terms. We don't have to say, well, this operator is a derivative with respect to x or anything like that. We just say x hat, p hat x is equal to i h bar; anything that follows that is going to behave like position and momentum. We don't have to write derivatives and various things like that. Why would we want to do that? Well, there's a kind of minimalist approach to the theory: the fewer detailed things you have to assume, the stronger your theory is, because it doesn't depend on whether those assumptions are correct. For example, implicit in taking the derivative is sort of the idea that space is a continuous thing, that there is such a thing as dx, an infinitesimal unit of space we're taking the derivative with respect to, and that we can use these nice stylized functions to represent reality. But what if that isn't so? What if space is digital, sort of like a billboard: when you get up close you see little dots, and only when you get back away does it look like a continuous picture or a photograph. Well, then you'd have to reformulate your whole theory if it depended on all that kind of thing. Whereas if you just say, well, I have got the commutators and that's it, you don't have to do anything different, because you come up with something else that has the same commutator, and then everything else in the theory still works. Whereas if everything in the theory depends on this linchpin of, well, there is a derivative and all this stuff that isn't quite true, then you're in trouble; you have to start over. Okay, we're going to talk in a couple of lectures about atomic spectroscopy, and in all kinds of spectroscopy (rotational spectroscopy, vibrational spectroscopy, and atomic spectroscopy) you run into this unit of energy that seems a bit odd. It's called the wave number, and it has a very strange unit, because the unit is centimeters to the minus one, and that doesn't seem like a unit of energy at all. And then, why is it centimeters? And so I want to take a little bit of an aside here to go into why we use this unit.
The main reason is that a wave number is about the right size. There's a reason why we have measurements like the foot and so on: they're about the right size to measure things that we encounter in real life. When we're writing down energies and transitions and so forth, if we express the energy in wave numbers, we get numbers like one and ten and a hundred and maybe a thousand, but we don't get numbers like ten to the minus thirty-four or ten to the plus twenty. Chemists don't like numbers like that, because it makes it harder to understand things, talk about them, and relate them; it's always a little more daunting when we've got huge and tiny numbers and we're dealing with them. For atomic and molecular spectroscopy, then, the relationship between the energy and the wave number is E is equal to h nu, which we knew from the photon. And we can write that in terms of the speed of light, because c, the speed of light, is the wavelength times the frequency. You can see that because if you've got a certain wavelength and a certain number of cycles per second, then wavelength times frequency is how far the wave moves per second; that is, the velocity of the wave, which for light is always c in a vacuum. So for the frequency nu we substitute c upon lambda, and then we set one over lambda equal to nu bar, and we say that the energy is h c times nu bar. Nu bar is the wave number; that's what we're quoting when we do that. One wave number, one centimeter to the minus one, corresponds to about 30 gigahertz, and the reason why is that three times ten to the ten is how fast light goes in centimeters per second. So that's the conversion between wave numbers and hertz, and it's much easier to visualize one unit as a wave number than 30 gigahertz, three times ten to the ten per second, as a frequency. They are the same thing. And we can measure the thermal energy and see that the wave number makes sense in that regard. As you may have learned in freshman chemistry, the random thermal energy that's around and can be accessed by a system at some temperature T in kelvin (of course, always in kelvin) is about kT, where k is Boltzmann's constant. Let's figure out, then, how many wave numbers this is. We take kT; let's take 25 Celsius, 298.15 kelvin. Put in all the constants: k, 1.38 times 10 to the minus 23 joules per kelvin; 298.15 kelvin; h, 6.626 times 10 to the minus 34 joule seconds. And then for c, all we have to do is put it in centimeters per second, because we see that the joules go away, the per second goes away, and the kelvins go away; if we just put c in centimeters per second rather than meters per second, we get the answer in inverse centimeters. What we find, then, is that random thermal motion is about 207 wave numbers. Therefore, if we've got energy levels that are spaced less than 207 wave numbers apart, then just things knocking around can knock stuff up into these higher levels, because there's plenty of energy around to do that. You can think of the 207 like this: if there are lots of hundred-dollar bills on the ground, just lying around randomly everywhere, then it's pretty easy to buy a cup of coffee. If it's very cold, there's no energy; there's no money anywhere, just a couple of cents, and there's no way you're going to buy a cup of coffee, so you can't make any kind of molecular transition in that case.
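The kT-to-wave-numbers conversion is easy to reproduce. A quick numeric sketch in Python (using standard constant values; the variable names are my own):

```python
# convert thermal energy kT at 25 Celsius into wave numbers (cm^-1)
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607e-34       # Planck constant, J*s
c = 2.99792458e10     # speed of light in cm/s, so the answer comes out in cm^-1
T = 298.15            # 25 Celsius in kelvin

nu_bar = k * T / (h * c)
print(round(nu_bar, 1))   # about 207.2 cm^-1
```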
Electronic excitation of atoms is usually much higher than this number, 207.2 wave numbers. And that makes sense, because materials only glow, let's say red hot, when they're quite hot. We know that if they're glowing, they're emitting light, that electrons are making transitions in the material, but that only happens if it's red hot, if it's actually very, very hot. It's not going to happen for any kind of metal or material like that at 205 Celsius; they don't emit visible light. And so that explains why the numbers we're going to get for electronic transitions in atoms are very, very much higher than that. That also explains, for example, why infrared light is just heat, as long as you don't get too much of it: because the photon energies are low, it doesn't do anything to chemical bonds, and neither do the radio waves or microwaves from your cell phone, regardless of what people who don't know anything about molecular structure or radiation say to the contrary. Whereas UV light, if we figure out how many wave numbers that has, is different. It has the potential to break some bonds, and that's why we can use it to sterilize water when we go camping: if we stick in an intense LED UV source, it breaks bonds in all the bacteria that are swimming around in the water. Of course they're there in very minute amounts, but the key is, when you drink it, are they alive, in which case they start doubling and quadrupling and so forth and make you very sick, or are they there but dead, because their bonds have been broken by UV light? Same thing with your skin: you get too much, you get sunburned; it's very bad. Now, in atomic spectroscopy it was experiment first and then theory next. In fact, most of the time it's experiment first, and theory comes in afterwards; once you know the answer, you can appear to be quite smart. It's only very rarely, say with the precession of the orbit of Mercury predicted by general relativity and a few cases like that, where the prediction was made, and then the observation was done, and it actually agreed with the prediction, because the theory was so deep. In this case, a dedicated spectroscopist, Rydberg, a Swedish man, found in 1890 that the emission lines from a hydrogen arc follow a pattern. So what's a hydrogen arc? Well, I can take hydrogen gas, which is a diatomic, and put a lightning strike through it, boom, just high voltage, and two things will happen: the bond will break, because I've got so much energy there, a cascade of electrons slamming in and knocking things up; and then the isolated hydrogen atoms will have electrons in very high orbitals, a random number, depending how big I make the arc, and then they start emitting light. Then I record all the light they emit and have a look at it. Quite how he divined this is a mystery; he must have been very, very good with numbers, and some people are very good with things like that while others are not so strong. But he looked at where these lines were, and he worked out that they followed this relationship: the wave number of the line appeared to be a constant times the difference of 1 over n1 squared minus 1 over n2 squared, where n1 and n2 are integers. For example, it could be 2 squared is 4 and 1 squared is 1, and so you get that one, and then there's 3 and 2, and so forth. He saw all these things, and this number R sub H, the Rydberg constant, where the H referred to hydrogen, was this number, 109,677 wave numbers. Why it followed that was a mystery, but it was interesting that it followed that.
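The Rydberg formula is easy to play with numerically. Here is a short sketch (again with my own variable names) that generates the first few lines of the Balmer series, the ones ending on n1 = 2:

```python
# Rydberg formula: nu_bar = R_H * (1/n1**2 - 1/n2**2), in cm^-1
R_H = 109677.0          # Rydberg constant for hydrogen, cm^-1

n1 = 2                  # lower level (Balmer series)
for n2 in range(3, 7):  # upper levels 3, 4, 5, 6
    nu_bar = R_H * (1.0/n1**2 - 1.0/n2**2)
    wavelength_nm = 1e7 / nu_bar        # 1 cm = 1e7 nm
    print(n2, round(nu_bar, 1), round(wavelength_nm, 1))
# n2 = 3 gives about 656 nm, the familiar red Balmer-alpha line
```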
And what was even more interesting, especially now that we're talking quantization, quantum mechanics, not continuum mechanics, is that these numbers were integers. That was a crucial observation, because this is a simple thing, hydrogen, one proton and one electron, and it's giving you the clue that there are numbers, and they're integers, dictating where the light comes out. If you just take differences between them, you can work all these states out. It would make sense, then, if you get all these combinations 1 over n squared minus 1 over m squared, that the energy levels themselves go like 1 over n squared, and n could be 1 (that would be down at the bottom), then 2, 3, and so forth, maybe up to infinity. And the zero of reference for this kind of atomic system, and all these systems in general, is that you take the electron and the proton and you move them apart so they're infinitely far apart, and you leave them at rest. If they're infinitely far apart, their electrostatic energy is zero, and if they're at rest, their kinetic energy is zero. That imaginary reference state you call zero, and then you compare what the energy of the atom is. And of course the energy is going to be negative in that case, because the way to think about it is: if I'm stuck down here, where they've attracted each other, and I want to pry them back out to there, I've got to put in positive energy. So going the other way from the reference, I had to go down in energy; I couldn't go up in energy, or I'd be getting energy back when I went off to infinity. Therefore we can write the Hamiltonian and the wave function for the hydrogen atom. We've got the time-independent Schrodinger equation; we've got the clue from the experiment as to what happened; and so we'll write down a Hamiltonian. It has the kinetic energy of the nucleus, which in this case is a single proton; it's got the kinetic energy of the electron; and it's got the potential energy between them. So I've written this here: H hat is minus h bar squared over two times the mass of the electron, times the second derivative with respect to where the electron is; minus h bar squared over two times the mass of the nucleus, times the second derivative with respect to where the nucleus is; minus e squared over 4 pi epsilon naught r, where r is a special thing: the distance between them. Now, suppose we solve this just as written. What would the wave function depend on? Well, it would depend at minimum on six coordinates, because it would depend on where the electron is in x, y, and z, and then on where the nucleus is in x, y, and z. That's six things I have to put in to get a value for the wave function. Now, how do I plot that? There's no way I can plot that, because I can't see what I'm doing; it's as if I have six axes. All I can do is make contour plots, fixing certain values and then cutting through, just like we do with a mountain range when we make a contour plot: we turn a three-dimensional thing into a map that's flat and we draw contours on it. But here I'd have to fix a bunch of these variables, let two of them vary, and then plot the wave function, and you're really blinded; it's like being stuck between two tall buildings, where you just can't see where on earth you are in the city, because you're going down this narrow alley. And so this is completely unworkable, and as you get more particles involved, the real wave function itself is just impossible to visualize, and it's extremely hard to calculate what it is as well.
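For reference, here is that two-particle Hamiltonian collected into one formula; this is just a transcription of what is described above, using r_e and r_N as my labels for the electron and nuclear coordinates:

```latex
\hat{H} \;=\; -\frac{\hbar^{2}}{2m_{e}}\nabla_{e}^{2}
\;-\;\frac{\hbar^{2}}{2m_{N}}\nabla_{N}^{2}
\;-\;\frac{e^{2}}{4\pi\varepsilon_{0}\,r},
\qquad r = \lvert \mathbf{r}_{e}-\mathbf{r}_{N}\rvert
```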
Since the wave function gives us the most information possible, most of the time we don't need to know everything about it; we just need to know enough to be able to calculate what we want to calculate. And there's a trick here I'd like to go through, to show you how we get rid of three of these coordinates so that we just have three things, and then we can make these plots in 3D space, where we use transparency and shapes to indicate a certain percentage chance that the electron is in a certain area. So, the first thing we notice is that our potential depends on just the difference in position between the two things. That's a clue that trying to specify the exact position of the nucleus in x, y, z and the electron in x, y, z is kind of a waste of time, and I need a trick so that what I'm looking at is the nucleus, and then I can pretend the nucleus is at zero and it's out of the picture. The way to do that is to change our coordinates. One coordinate is the center of mass: wherever the proton and the electron are, I take their center of mass, which is very close to the proton, and I call the total mass big M. The other part uses the reduced mass, which should be familiar from vibrational problems; that's given the symbol mu, and it's the mass of the nucleus times the mass of the electron over the mass of the nucleus plus the mass of the electron. When the nucleus is heavy, mu is pretty much the mass of the electron. The mass big M, the center of mass, just drifts around as a free particle, because there is no potential that refers to the coordinates of the center of mass; the potential refers only to the difference between the two particles, not to where they are in the universe. And we know the solution for that; that's just our plane waves, e to the i p x upon h bar, or as I've written it here, psi for big M, the center of mass, is equal to some constant times e to the minus i k dot r. So that part is done. All the center of mass can have is kinetic energy; it can't ever have any potential energy in this kind of problem. The other part, which separates, depends only on the difference between the two particles, not their absolute coordinates, and we can write it as minus h bar squared over two mu times del squared, where now del refers just to the difference: what's the difference in x between the nucleus and the electron, in y, and so forth. And then this is a dodge: usually we aren't interested in the center-of-mass motion. We know the thing is going to drift around; in fact, we try to design experiments where things are either very cold or very quiet, so that things aren't drifting around too much, because if things drift around we get slight frequency shifts, just like a train whistle coming toward you or going away, and usually if we're trying to do an accurate measurement we don't want that stuff; we'd like to know what the train whistle sounds like when the train is stationary. Mathematically, though, this lets us just forget about the center of mass: we take the coordinate of the nucleus and we artificially, totally artificially, assume it's fixed. We know it can't be fixed, because of the uncertainty principle, but we assume that it is for the purpose of the calculation, and then if we're really doing something detailed, we add the center-of-mass part of the calculation back in later.
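It's worth seeing just how close mu is to the electron mass for hydrogen. A one-liner's worth of Python, with standard mass values:

```python
# reduced mass of the hydrogen atom: mu = m_N * m_e / (m_N + m_e)
m_e = 9.1093837e-31     # electron mass, kg
m_p = 1.6726219e-27     # proton mass (the nucleus, for hydrogen), kg

mu = m_p * m_e / (m_p + m_e)
print(mu)               # about 9.104e-31 kg
print(mu / m_e)         # about 0.99946, within 0.06% of the electron mass
```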
Now then, this is great, because we're down to one derivative, and it's a three-dimensional problem, but the potential has spherical symmetry. Therefore we just imagine the nucleus at zero, zero, zero; the electron then carries the spatial variable, and we just go forward, although, as I remarked, in real fact the nucleus can't literally be fixed. We go ahead, then; we know how to transform the d by dx squared, d by dy squared, and so forth into spherical polar coordinates. We get the second partial derivative with respect to r, plus two over r times the first derivative with respect to r, plus one over r squared times the same thing we had before for the particle on a sphere. Recall that before, when we got to this point, we just said, well, r is fixed, let's look at theta and phi. Now we've got r in there, but we've got the other piece where we already know what that part does, and so all we really have to figure out is what the r part does when, instead of being fixed, it has this electrostatic potential between the two particles, which depends only on r, not on theta and phi. As I remarked, we've already solved that part of the problem on a sphere, no big deal, and all we do then is assume, like we always did, that the wave function, whatever it is, is a product: a part in r that I'm going to call big R of r, and a part in theta and phi that I'm just going to call Y, because that's the conventional symbol for the spherical harmonics, Y l m. And you can verify on your own, as an exercise, that if you make this substitution into the Schrodinger equation, it will separate: you get two parts, one that depends only on theta and phi, the other only on r. Therefore you can do them separately, and you get the following two equations. You get that minus h bar squared over 2 mu, times 1 over Y, times what we call lambda squared (which was all the sine theta and d by d phi business) applied to Y, is a constant. And the other part you get is minus h bar squared over 2 mu, times 1 over big R, times the quantity r squared d squared big R dr squared plus 2 r d big R dr, plus V r squared, minus E r squared, and that should be minus whatever the other constant was, so that the two of them together add up to zero. They're separately constants: once you pick one as a number, the other is the same number with the opposite sign. We already know the top part; it's just the particle on a sphere, so it's a good thing we did that. We know that minus h bar squared over 2 mu times lambda squared on Y is equal to h bar squared over 2 mu times l times l plus 1, times Y, and so the constant is h bar squared over 2 mu times l times l plus 1. That's the constant we need. The second equation we can simplify: instead of dealing with big R, let's make a substitution, little u of r is equal to r times big R, and let's see where that takes us. Let's do this together as a practice problem now and see if we can come to some conclusion. So here's practice problem 15: show that the differential equation simplifies substantially if we express it in terms of u rather than in terms of big R. What is the connection with the classical case? Okay, well, we write u equals little r times big R, and we have to use the product rule to take derivatives. That being the case, du dr is r times d big R dr, plus big R times the derivative of little r with respect to r, which is 1; so du dr is r d big R dr plus big R. We can take the second derivative by taking the derivative of the derivative, following the same rules: the first term gives r d squared big R dr squared plus d big R dr, and the second term gives another d big R dr, so we get two extra terms out the end: r d squared big R dr squared plus 2 times the derivative of big R with respect to r. Then we just solve for the second derivative of big R, and what we get is that d squared big R dr squared equals 1 over r times d squared u dr squared, minus 2 over r times d big R dr.
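That little identity, r R'' + 2R' = u'' when u = rR, is exactly the kind of thing sympy will confirm in a few lines, as a sanity check (a sketch, with symbol names of my choosing):

```python
from sympy import symbols, Function, diff, simplify

r = symbols('r', positive=True)
u = Function('u')(r)
R = u / r                       # the substitution R(r) = u(r)/r

# the radial combination r^2 R'' + 2 r R' should collapse to r * u''
lhs = r**2*diff(R, r, 2) + 2*r*diff(R, r)
print(simplify(lhs - r*diff(u, r, 2)))   # 0
```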
And if we substitute that in for the second derivative of big R in our original differential equation, it turns out that the minus 2 over r times d big R dr cancels the other term perfectly, and we find the following: we've got minus h bar squared over 2 mu, times little r over u, times little r times the second derivative of u with respect to r, plus V r squared, minus E r squared, equal to the constant, which is minus h bar squared over 2 mu times l times l plus 1. If we divide by r squared and multiply by u, we can simplify that, and we can finally write the equation in the form I've written at the bottom. We have the second derivative of u with respect to r; that looks like a kinetic energy in one dimension. Then we have minus e squared over 4 pi epsilon naught r, times u of r; that's the potential. Then we have another term times u of r, which is h bar squared over 2 mu r squared, times l times l plus 1. This whole thing should equal E times u of r, and that's just a one-dimensional problem now, on the interval r equals 0 to infinity; so rather than minus infinity to infinity, like x, it's slightly different, 0 to infinity. We can write it simply as a pseudo kinetic energy operator operating on this wave function u of r, plus V effective on u of r, where V effective is the real electrostatic potential, which has a 1 over r dependence, plus h bar squared times l times l plus 1, over 2 mu r squared. And that looks just like a centrifugal term: if I have something moving in a circle, it has an energy L squared over 2I, and that's exactly what we would expect. If l is non-zero, so the electron has angular momentum, then we have to add the energy of the angular momentum before we solve for u. So that's the nice simplification. We still have to solve for u, and then recover big R, in order to see what kind of electron density we expect to find as a function of little r, the separation. That's what we'd like to know: if we've got a hydrogen atom, where's the electron hanging out, and what's the chance of finding it here or there and so forth. The case when l is not equal to 0 is harder for us to do. Why? Because besides the kinetic energy term, we have the potential energy, 1 over r, and we still have to figure out what to do about that, because last time we had either no potential energy, for the particle in a box, or one half k x squared, for the harmonic oscillator. Now we've got something a little more challenging, 1 over r. In particular, you might worry a little bit that when r is 0, that blows up, and so maybe it's going to be very difficult to solve the differential equation; it turns out that's not such a big worry. And then we've got another term, this l times l plus 1 upon r squared, and that adds another thing in. So now we have a 1 over r and a 1 over r squared, and we have to find the function that handles both. We're going to explore how to do that, step by step, in the next lecture, where we'll talk about these radial distribution functions.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:22 Spin 0:02:56 The Vector Model 0:11:06 The Commutator 0:22:25 Wavenumbers 0:29:45 Atomic Spectroscopy 0:34:34 The Hamiltonian Wavefunction 0:42:52 Separation of Variables
10.5446/18888 (DOI)
Welcome back to Chemistry 131A. Last time we talked about more realistic oscillators. We talked about the 6-12 potential. We talked about the Morse potential. And then we got away from one-dimensional systems, systems with just one variable, which we were calling either x or r, whatever, and we did a particle in a box in two dimensions. And we discovered that under certain conditions there was degeneracy, if the lengths of the box were the same and so forth. And three dimensions is the same. And the main weapon we had was that we took a two-dimensional equation that had two variables in it, and we separated it into two one-dimensional equations, both of which we already knew the solution to, and then we just substituted in the solution. Today, rather than talking about a particle in a box, I want to talk about a particle confined to a ring, first of all, which will be an interesting thing to solve, and then a particle confined to the surface of a sphere. Both these things will be important as a prelude to atoms. But both of them are completely artificial, because if you say, well, I've got a particle confined to a ring, you have to ask yourself what kind of potential it has so that the particle stays just on a circular ring. And that's not a very realistic-looking potential: if you're on the ring, you're just there with kinetic energy, and if you're off the ring by epsilon, the potential is infinite, or something like that. And so we could get some kind of anomalous-looking behavior. But really what it means is that we're going to set up the equation in two dimensions, and then we're going to freeze one dimension and solve the other, which will be an angular variable. And we'll leave the radius of the ring as something we fix at the outset, and then we go from there. And we'll treat the radius of the sphere the same way. And luckily, when we do real atoms, at least for simple ones where we don't have too many electrons, just one being the right amount, we can actually factorize the thing, and so we can use these solutions for the ring and the sphere and then just paste them together, exactly like an onion, like growing an onion in shells. We argued early on that a particle confined to a ring, which we could think of as sort of like an electron in a classical orbit, would have to have a wavelength that fit. And the idea is that every time it goes around, it has to match up perfectly, because if it doesn't, it cancels out, and in fact there's nothing left for the probability, because it's basically minus itself half the time. And so that was an argument that the de Broglie wavelength had to match. Our qualitative condition, then, is that we have to have an integral number of wavelengths: n lambda has to equal the circumference of the ring, which is 2 pi r. If that condition is met, then we can go around like that, and we can come back, and we'll be at the same place. And that means we've got a stable standing wave, a probability pattern that's not changing in space or time. Now, the de Broglie wavelength is h divided by the momentum; therefore we can just substitute for that: n times h upon p is equal to 2 pi r. And then we can write that in a very suggestive way. We can divide both sides by 2 pi, turn the h into an h bar, and multiply both sides by p. And then we have n h bar is equal to r times p. That's interesting, because r times p is going to have something to do with angular momentum.
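Collected as equations, that little chain of steps reads:

```latex
n\lambda = 2\pi r,\qquad \lambda = \frac{h}{p}
\;\;\Longrightarrow\;\; n\,\frac{h}{p} = 2\pi r
\;\;\Longrightarrow\;\; rp = n\,\frac{h}{2\pi} = n\hbar .
```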
And if we have a particle on a ring, we automatically think of angular momentum. Whenever we've got anything going in a circle or confined like that, we think of the angular momentum of such a particle; we learned about that in classical physics. Let's have the ring oriented in the x, y plane. Then that means the particle has an angular momentum: the vector J is equal to r cross p, the vector cross product. In this case r points to the ring and p is the way the particle is going, so r and p are always at right angles. And in fact J, which is r cross p, points in the z direction if we've got the particle in the x, y plane. And so we can take the z component of the angular momentum and relate it to the particle. That means we've got Jz, which turns out to be r times p times sine theta, and sine theta here is sine 90 degrees, or sine pi over 2, which is 1. So Jz is equal to r p, and that's equal to n h bar by the condition that an integral number of wavelengths fit in. And now, sort of by this backdoor route, we have that angular momentum seems to be quantized. Photons came in units of h nu; angular momentum comes in units of h bar. Now, our time-independent Schrodinger equation for the particle on a ring is the following: it's the kinetic energy with respect to x plus the kinetic energy with respect to y, second partial derivatives, the same thing as the particle in a two-dimensional box, equal to E times psi of x, y. And the problem is that x and y in this problem are very, very awkward variables. That's because r is the square root of x squared plus y squared, and so on and so forth, and if you just try to bully your way through this equation using Cartesian coordinates, which are set up for square problems, you'll never get anywhere. What you first have to do is change your variables so that you're in the same kind of funhouse mirror as the problem is, and then the problem will seem very easy. And that's what we're going to do. We've got a circular problem; we're going to use polar coordinates. So we're going to set up two new variables rather than x and y. The first is r. Why? Because r is constant, and that's the perfect thing to have: the derivative with respect to r is then zero. And the other variable is just where I am on the ring; let's call that phi. Then I have the relationships that x is equal to r times cosine phi, that's this leg of the triangle, and y is equal to r sine phi, that's that leg of the triangle. And of course r squared is equal to x squared plus y squared, which is always constant. Now we have to make, unfortunately, a transformation not only of the variables x and y but of the derivatives, and that's trickier. So we need some actual multivariable calculus to do that transformation. And I'm not going to go through the transformation over and over and over, because it gets very tedious, but I want you to see one time exactly how you do it, so that you'll understand where these terms come from. To do this, then, here's what we have to do: we have to use the chain rule from calculus, and we have to understand that if we have a function of more than one variable and I change something, then to figure out the total change in the function, I should take the slope with respect to the x direction times the change in x, and the slope with respect to the y direction times the change in y, and that will give me the total change in the function. I need to take two things and add them up.
And for each one of them, if I use a different variable, I can use the chain rule. So here we go. The derivative of psi, the wave function, with respect to r has to be d psi dx times dx dr. You can think of these things just like fractions, where you cancel out the dx; that's how you can remember the chain rule: d psi dr is d psi dx times dx dr. That's the one direction. But psi is a function of two variables, so I have to add d psi dy times dy dr. And now I have formulas for x and y with respect to r, and so what I get is d psi dx times cosine phi, plus d psi dy times sine phi. The second derivative is the derivative of this thing, and it gets messy fast, but it's a really good exercise in thinking clearly. Because what you do is take each of these terms, call d psi dx some other thing, put that into the formula, and then expand it back out very carefully, and you'll see. So don't skip any steps. Let's take another derivative. The second derivative of psi with respect to r squared is just d by dr of what we got: d psi dx cosine phi plus d psi dy sine phi. And I can work through that by factoring out a cosine phi and getting the second derivative of psi with respect to x squared, times dx dr, plus the mixed second derivative of psi with respect to x and then with respect to y, times dy dr. And you might say, well, what about the order of the x and y derivatives? The answer is that for wave functions and the nice functions we deal with, we don't really care about the order: if we take the partial derivative with respect to y first or the partial derivative with respect to x first, those are equal, so we don't worry too much about that. And then we have sine phi times, again, two terms, both of them with a second derivative, one with dx dr and one with dy dr. And if we add all that up, we get cosine squared phi times the second derivative of psi with respect to x squared, plus sine squared phi times the second derivative of psi with respect to y squared, plus two sine phi cosine phi times the mixed partial derivative. The second derivative with respect to phi I'm going to leave as a problem. We do it the same way: we start with d psi d phi, write it with the chain rule, and whenever we have x over r or something like that, we express it in terms of cosine phi and sine phi. What you'll find is that you get a slightly longer expression, actually about the same length, but you get minus r times d psi dr, plus r squared times three terms, which look very similar to the other three except with slightly different ordering. And now we can get the relationship we need between the Cartesian and the polar second derivatives, because now we have these second derivatives with respect to r and with respect to phi, and we know what they are in terms of the derivatives with respect to x and y. It's still quite a bit of algebra, and again I'll leave that for you; I'll quote the result so you can see what it is. What you find, then, is that the second derivative of psi with respect to x squared, plus the second derivative of psi with respect to y squared, is equal to the second derivative of psi with respect to r squared, plus 1 over r times d psi dr, plus 1 over r squared times the second derivative of psi with respect to phi.
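If you don't want to push through the algebra by hand, you can at least check the final identity on a test function. A minimal sympy sketch (the test function is an arbitrary choice of mine; any smooth f(x, y) works):

```python
from sympy import symbols, sin, cos, diff, simplify

x, y, r, phi = symbols('x y r phi', positive=True)

f_cart = x**3*y - y**2                      # arbitrary smooth test function
lap_cart = diff(f_cart, x, 2) + diff(f_cart, y, 2)

# the same function rewritten in polar coordinates
f_pol = f_cart.subs({x: r*cos(phi), y: r*sin(phi)})
lap_pol = (diff(f_pol, r, 2)
           + diff(f_pol, r)/r
           + diff(f_pol, phi, 2)/r**2)

# the two Laplacians should agree once both are expressed in polar variables
print(simplify(lap_pol - lap_cart.subs({x: r*cos(phi), y: r*sin(phi)})))  # 0
```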
And we can check that we haven't made any mistakes by putting in units: all three terms have units of length squared on the bottom (forget about the wave function units for the time being). We have the second derivative, dr squared, on the bottom; 1 over r times d dr; and 1 over r squared times d phi squared, where phi doesn't have any length, because the angle is just a ratio of things, being in radians. And now, at this point, we make the totally artificial assumption. We just say: hey, particle on a ring, let's just freeze r. So wherever we see a derivative with respect to r, we say it's 0, because r is not changing. And that's great, because now we just have 1 over r squared times the second derivative with respect to phi, and, except for phi instead of x, that's looking awfully similar to things we've already done. Therefore the Schrodinger equation simplifies to this: instead of the Cartesian coordinates, we have minus h bar squared over 2m, times 1 over r squared, times the second derivative with respect to phi of psi, which is now only a function of phi because r is fixed; that's a constant; and that's equal to E times psi of phi. This is beginning to look really good, and here's why. We know that m times r squared is the moment of inertia of a classical particle orbiting around in an orbit of radius r. And we're doing a rotational problem by keeping the particle on a ring, and we found the moment of inertia coming into the problem naturally, just by the way it worked out. The differential equation, then, if we write it in terms of the moment of inertia I and the energy, is just: the second derivative with respect to phi is equal to minus 2 I E upon h bar squared, times psi. And the solution of an equation like that, not surprisingly, is an exponential. So we can write down the solution: look, it's A e to the i mu phi plus B e to the minus i mu phi, where mu is the square root of 2 I E, upon h bar. We can verify, if we shove that in, that we solve the Schrodinger equation and get the answer. We need to have the i because we need a minus sign after we take the derivative twice: that could come from i squared, which is minus 1, or from minus i, squared, going around the other way, which is also minus 1. But we can't have any real exponentials. And so these functions are things that are corkscrewing around one way or the other, and when they corkscrew around, they come back and meet; they could go the other way, but they meet. The quantization arises because the wave function has to meet itself on the way around. And that means that if we add 2 pi to the argument of the wave function, it has to be the same thing. If that weren't true, the wave function wouldn't be single-valued in space: depending on what value we picked to call the angle, it would have a different value, but it's the same point on the ring, so it has to have the same value. So it has to exactly come around, and that means that mu times 2 pi is equal to 2 pi n, where n is an integer: if we add 2 pi to the angle, the phase has to change by 2 pi times an integer. And conventionally, rather than using n, because n gets used for other things to do with energy, we use m, where m is an integer, and m is called the magnetic quantum number. Why is it magnetic? Because if we've got a charge on a ring, and we imagine it's moving around, a charge on a ring is a current loop, and a current loop makes a magnetic field. That's exactly how you make an electromagnet: you take a ton of wire, wind it around a core, and put some current through it, and you can pick up all kinds of stuff. And that's quite a fun thing to do when you're in elementary school.
And I remember spending considerable time doing exactly that, and seeing what I could and couldn't pick up. In fact, we can make another connection with classical mechanics. The angular momentum Lz is just the z component of r cross p, and we can put in our quantum operators: x hat p y hat, minus y hat p x hat. That's what the z component of r cross p is if we set it up, and that's equal to minus i h bar times x, the derivative with respect to y, minus y, the derivative with respect to x. So that's our operator. And we can take that and convert it to polar coordinates by exactly the same tricks as what we did before. If you take that particular combination, and you're very careful, and you convert to polar coordinates, you find out it comes out to be this real simple thing: minus i h bar d by d phi, just the first derivative with respect to the angle. That's what we get. Well, that's really interesting, because what that means is that when we did the energy, we said, well, we could have e to the i m phi or e to the minus i m phi, and we could have A of one plus B of the other; we could have any amount of each. But if we want the particle to be in an eigenstate not only of energy but of angular momentum, then that means it's one corkscrew: it's either e to the plus i m phi, going one way, or e to the minus i m phi, going the other way. And so what we do when we set up these problems, for neatness, is we either set A equal to zero and say m is negative, or we set B equal to zero and say m is positive. And so m can vary from minus some value to plus some value, anywhere in between, including, apparently, zero. And the interpretation of the equal energies is that a particle going this way at some rate and a particle going the other way at the same rate, well, they have the same kinetic energy. This, then, is our final solution: e to the i m phi, where m is any positive or negative integer. And apparently m could be zero. Why not? That solves it as well, and it certainly doesn't have a problem: if there's no twist at all, if it's just flat, of course it meets up. If we normalize the wave function over the ring, that means that the probability that the particle is somewhere on the ring is one. If we do the integral, since e to the i m phi times e to the minus i m phi is one, the integral is 2 pi. We don't integrate over r, because r is not in the problem any longer; it's fixed; we just integrate over phi. Then we get our normalized wave function for the particle on the ring: one over the square root of 2 pi, times e to the i m phi.
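A quick sympy sketch confirms that this wave function is normalized and is a simultaneous eigenfunction of the ring Hamiltonian and of Lz. The names here are mine; in particular I_mom stands for the moment of inertia, to avoid clashing with sympy's imaginary unit I:

```python
from sympy import (symbols, exp, I, pi, sqrt, diff, integrate,
                   simplify, conjugate)

phi, hbar, Imom = symbols('phi hbar I_mom', positive=True)
m = symbols('m', integer=True)

psi = exp(I*m*phi) / sqrt(2*pi)

# normalization over the ring
print(integrate(simplify(conjugate(psi)*psi), (phi, 0, 2*pi)))  # 1

# energy eigenvalue: H psi = -(hbar^2 / 2I) psi''
print(simplify(-hbar**2/(2*Imom) * diff(psi, phi, 2) / psi))    # hbar**2*m**2/(2*I_mom)

# angular momentum eigenvalue: Lz = -i hbar d/dphi
print(simplify(-I*hbar*diff(psi, phi) / psi))                   # hbar*m
```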
The lowest energy here is m equal to zero, which is zero energy. And this seems to run counter to what I've been saying, which is that whenever you have a confined particle, you should have some zero-point energy. Why? Because you want to satisfy the uncertainty principle. The reason why this seems to violate the uncertainty principle is just kind of glossed over in the book, but it's this. First of all, we just threw r out: we really had a two-dimensional problem, but we froze one of the variables, and who says you can freeze one of those variables exactly like that? That's point one. And point two is that phi seems to have an artificial range. We say phi goes from zero to 2 pi, but it would be the same if phi went from zero to infinity, because it keeps wrapping around over and over and over. And so if you argue, well, you don't know phi, it could be anywhere between minus infinity and infinity and still be somewhere on the ring, then you could have the momentum be zero (of course, you have to have the momentum be zero if you have the energy zero) and still have the position, in terms of the actual value of phi, be indeterminate. It's kind of a mathematical dodge. But you have to be careful if you set up a problem and then impose a constraint that might not be physical, say that it has to be exactly on the ring, and then get in a tizzy that something doesn't seem to be quite right, because the arguments may be quite subtle at that point. The probability density for an energy and angular momentum eigenstate is independent of the angular variable phi; the probability density is flat. And of course it would have to be, because there's no reason why I should expect to find the particle more on one side of the ring than the other when there's no difference between the sides of the ring. So it's got to be flat, and that makes perfect sense: if there's nothing to distinguish the two sides, how are we going to tell? And quantization, again, arises from the fact that the wave function has to match up; it has to be single-valued. Now, the next step is to expand to a sphere. And you could argue a sphere is a bunch of rings; that might be a good way to look at it, in fact, to pile up rings. It's artificial again to assume that the particle can go anywhere on the sphere but can't move radially at all, but nevertheless it's a really good stepping stone to getting to the point where we can write down atomic orbitals and figure out what's going on. We've got a sphere, and if we write down the kinetic energy (because we're keeping the particle on the sphere, we've got no potential energy, except this completely arbitrary potential energy that's keeping the particle right on the sphere), we have to convert to spherical polar coordinates. The coordinate system here has two angular variables. We have phi, which goes around the z-axis, taking x into y and so forth, and we have theta, which starts at the north pole, where theta is 0, and goes to the south pole, where theta is pi, or 180 degrees. But we don't go around again, because every time we pick a theta we make a ring, like a tree ring: we go here, we make another ring like a tree ring; we go to the equator, we do that one; and we go down and finish at the bottom. If we went around again, we'd be counting all the rings twice. So phi varies from 0 to 2 pi, or 360 degrees, around, and the other one, theta, varies from just 0 to 180 degrees. We adopt a right-handed coordinate system, and that means that for x cross y, your thumb points in the plus z direction. That's by far the easiest way to do it, because if somebody draws a figure where x is shooting this way and y is that way, and you try to reorient it in your mind so x is to the right and y is into the paper, it takes a long time mentally; it's quite a gymnastic. But if you just grab your hand and do it, you can figure it out. Be careful, though, because what you tend to do when you're doing problems is, you have the pen in your right hand, if you're right-handed, and when you want to figure out which way something's going, you use your free hand. And that's the wrong hand. And if you do that on an exam, in physics especially, you just get marked wrong, because you didn't use a right-handed coordinate system.
And I've watched that happen to people at various stages of my career. So here's a right-handed coordinate system, with theta and phi set out, and our position is given by three numbers: r, the distance out; theta, the angle down from the north pole; and phi, the angle around from the x-axis. What's very confusing is that if you take a course in math, for some reason that I cannot fathom, the variables theta and phi are swapped, so theta is the one going around and phi is the one coming down. I think that's because in math, when you do two-dimensional problems, you always call the angle theta, and they just like to keep theta for the same variable and use phi for the other one. But in physics, it's phi around the z-axis and theta away from the z-axis, and you have to keep them straight, because if you open the wrong book, you'll get things backwards, and then you'll get a terrible mess. Let's try a practice problem. Consider a volume element: dV is equal to dx dy dz in Cartesian coordinates; it's a little cube. What's the volume element for spherical polar coordinates? So let's take a radius r and an angle theta from the z-axis. What's the formula for the volume? Well, we don't care what phi is, because it's always the same around, but we do care what theta is, and here's why. The size of the onion ring up near the north pole is tiny: as theta gets smaller, the total size of the ring gets smaller and smaller. As we go toward the equator, the same ring, around 360 degrees in phi, is much bigger in terms of the volume it will hold if we take a little slice in r. And so we have to weight the volume elements by how close we are to the north pole. It's like imagining you can walk around the north pole in a little circle and you've walked around all possible longitudes there; but if you try to do it at the equator, it's a very long walk. Same thing. So here's the beauty of calculus. We actually draw this thing as a wedge, because r is changing, so the inner part is smaller than the outer part, like a little cone. But when we take only very tiny differences, it's like a cube, and all we need to do is take the size of the cube. That's the beauty of taking very small things, dx's: no matter how curvy something is, if you take it small enough, it's a straight line. That's why calculus is so great. And so we can figure out that the distance out from the axis here is r sine theta, and so the distance along, if I move by d phi, is r sine theta d phi. The distance the other way is r d theta, because that arc is at the full distance r. And the distance in the third direction is just dr; neither theta nor phi changes. So we can take those three lengths and multiply them together, and we get that the volume element is r squared sine theta, dr d theta d phi. That's important to know, because you're going to have integrals to do, with psi and so forth in them, and they're going to have dV in them, because you have to integrate over all space. You need to know what dV is in terms of these variables, and now you know what it is.
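The same volume element drops out of the Jacobian determinant of the coordinate transformation, which sympy can compute directly; this makes a nice cross-check on the geometric argument:

```python
from sympy import symbols, sin, cos, Matrix, simplify

r, theta, phi = symbols('r theta phi', positive=True)

# Cartesian coordinates written in terms of spherical polar coordinates
X = Matrix([r*sin(theta)*cos(phi),
            r*sin(theta)*sin(phi),
            r*cos(theta)])

J = X.jacobian(Matrix([r, theta, phi]))
print(simplify(J.det()))    # r**2*sin(theta), so dV = r^2 sin(theta) dr dtheta dphi
```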
And exactly the same way as we did before (though I don't want to take an hour to go through it carefully, so I'll just leave it to you, if you're interested, to work it out once), we can take the Cartesian second derivatives, the second derivative with respect to x plus the second derivative with respect to y plus the second derivative with respect to z, and cast them in the following form: the second derivative with respect to r, plus 2 over r times the first derivative with respect to r, plus 1 over r squared times this operator I've called lambda squared. And lambda squared has nothing in it except theta and phi. It has a 1 over sine squared theta times the second derivative with respect to phi, and a second term, which is written in a very funny way: 1 over sine theta, d by d theta, of sine theta d by d theta. Unless you're used to dealing with operators, this takes getting used to: it's written in a very compact, very nice way, so you don't have a lot of terms, but you have to be quite careful when you actually put it on a wave function, because you only put the wave function on the right of the operator. You don't start inserting it in between things. You just put it on the right and then go sequentially: take the derivative with respect to theta, multiply by sine theta, take the derivative again, and so forth. But if you don't understand the operator notation, then you're very likely to get things wrong, because you may stick a psi in wherever you think there's a blank, and that's not correct. This thing, lambda squared, is called the Legendrian; it's very famous, connected with the Legendre polynomials and so forth, and as I said, the operator takes a bit of getting used to. Only put the wave function on the right; be careful about that; don't put psi in front of both those derivatives. The operator lambda squared, the Legendrian, has all the angular energy in the Hamiltonian, because the other terms had derivatives with respect to r, and if we freeze r, so we don't allow any change in r, there's no energy that way. That means that this thing, lambda squared, is what we want to focus on; it's just the energy to do with all the possible angular motions of things on a sphere. So let's fix r and throw that out again, the same way as we did with the particle on a ring. And now we've got this new equation to solve: minus h bar squared over 2 m r squared, times lambda squared on the wave function, which is a function of theta and phi, is equal to some energy times the function of theta and phi. If we can find the wave functions that solve that eigenvalue equation, and the eigenvalues, then we know the energy and we know the possible wave functions on a sphere. And of course, we expect that it's going to be quantized and so on, because it's a trapped thing again, and as we go around in theta and phi (it's much more complicated this time, because there are two of them, so it's harder to see) it's got to be similar to what we had before. There's no potential energy here; the only requirement for the quantization is just that the wave function fit into the space.
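You can see the Legendrian in action on the simplest non-trivial spherical harmonic. Here is a sympy sketch, writing the operator exactly as described: act only to the right, take the derivative, multiply by sine theta, take the derivative again.

```python
from sympy import symbols, sin, cos, diff, simplify

theta, phi = symbols('theta phi')

def legendrian(f):
    """Apply (1/sin^2) d^2/dphi^2 + (1/sin) d/dtheta (sin d/dtheta) to f."""
    return (diff(f, phi, 2)/sin(theta)**2
            + diff(sin(theta)*diff(f, theta), theta)/sin(theta))

Y10 = cos(theta)   # the (unnormalized) l=1, m=0 spherical harmonic
print(simplify(legendrian(Y10)/Y10))   # -2, i.e. -l(l+1) with l=1
```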
Okay, let's do a practice problem here, practice problem 14. Let's show that the angular Schrodinger equation is separable. And what does that mean? It means that whatever this wave function in theta and phi is, we can write it as a product of something that's only a function of theta and something else that's only a function of phi. If we can do that, then we'd guess that the something that's only a function of phi is what we had before, because last time, when we did the two-dimensional particle in a box, it was a product, and the X part was the same one as what we had before. Now, theta is different from phi, because phi goes around through 2 pi and theta only goes partway, so we wouldn't expect it to be quite so easy; theta could be different. But still, phi should be the same as it was before, and so that e to the i m phi stuff we got saves us a lot of work. Okay, here's what we've got to show. It's separable if we can write the solution as a product, and to prove that, what we have to do is rearrange the equation so that we have two terms, one of which depends only on theta, plus another term that depends only on phi, equal to a constant. And then we make the same argument: if we fix theta and change phi, and the sum is a constant, that means both terms are separately constant; otherwise it wouldn't work. And that means we can do a one-dimensional equation for each one. So let's try the trial product solution. Before, we used capital X of x; now let's use capital Theta of theta (it's just some function; we don't know what it is, but we don't care at this point) times capital Phi of phi, and substitute it in. So we've got the Legendrian acting on this product, and a number on the right: here this number epsilon, appearing as minus epsilon, is just 2 I E over h bar squared, where I is the moment of inertia; that just cleans things up so we don't have to write a lot of extra terms. Now, if we substitute this in, here's what we find. We have 1 over sine squared theta, times the second derivative with respect to phi of the product, plus 1 over sine theta, times the derivative with respect to theta of sine theta times the derivative with respect to theta of the product, and that is equal to minus epsilon times the product. I do the same thing as before: I divide both sides by the product. And first I say, aha, I've got derivatives of a product of two functions. In the first term, where the second derivative is with respect to phi, the function big Theta acts as a constant, so I can pull it out in front; it doesn't matter where it is; that's what the partial derivative means. In the second term, where the derivatives are with respect to theta, I can pull the function big Phi out in front as a constant, because it's just a bunch of derivatives with respect to theta and we treat phi as a constant. So let's pull those out, the same way we pull a constant out of a derivative, and that makes it much easier to see. Now we've got big Theta out in front of the 1 over sine squared theta times the second derivative of big Phi with respect to phi, and big Phi out in front of the theta terms, equal to minus epsilon times the product. And now, if I divide by the product on both sides, the big Theta goes away in the term with phi, and the big Phi goes away in the term with theta. But we still have this sine squared theta, so we have to multiply the whole equation through by sine squared theta. And then, if we do that, we finally find the following: we have 1 over big Phi times the second derivative of big Phi with respect to phi, that's one term, plus a bunch of gobbledygook, but it doesn't matter what it is, because it doesn't have phi in it; it's all theta, and it has an epsilon sine squared theta. That sum is equal to zero. That's good enough, because I have this thing over here which is just phi and that thing over there which is just theta, and so I've got two equations, one of them just in phi, the other just in theta, and I'm in. And once we've got it down to one variable, we can switch to the regular d instead of the funny d, because it's the same thing, and we can solve the differential equation. So the first term, as I said, is going to give us exactly what we had before in the particle on a ring, because it's basically the same equation as the particle-on-a-ring equation.
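Collecting the separated pieces in equation form, as a summary of the step just described (taking minus m squared as the separation constant for the phi part, the conventional choice):

```latex
\frac{1}{\Phi}\frac{d^{2}\Phi}{d\varphi^{2}} = -m^{2}
\quad\Longrightarrow\quad \Phi(\varphi) \propto e^{im\varphi},
\qquad
\sin\theta\,\frac{d}{d\theta}\!\left(\sin\theta\,\frac{d\Theta}{d\theta}\right)
+\left(\varepsilon\,\sin^{2}\theta - m^{2}\right)\Theta = 0 .
```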
What the theta part actually turns out to be will depend on solving that second equation, but that doesn't bother us. We have e to the i m phi, and then whatever big Theta happens to be is going to be some other function that it's our business to solve for in the second part, which is a different equation to do. And so we can break this very complex problem up into a series of problems and solve them. It turns out that for a real problem with an atom, what we're going to do is first break the problem up into the particle on a sphere, and then just let r be the other variable, and, not surprisingly, we're going to try a product: we're going to guess that the wave function is a product of some function of r, times some function of theta, times some function of phi, and see if that doesn't work, and that gives us the clue for how to factorize the thing and see that it works. When will that fail? Well, unfortunately, it'll fail right away if we have two electrons, because if we have two electrons, then with where each of them happens to be, and the two repelling each other while both are attracted to the nucleus, it gets too difficult. We can't separate the equation, and so we run into problems. In that case, what we do is treat each electron separately, solve it, and then take the electron-electron repulsion part that we couldn't handle and couldn't separate, and treat that as a perturbation. How good that will be will depend a lot on how close the electrons are getting. You can imagine that if the electrons happen to get pretty close in space, the potential could get quite high, and so the electrons will tend to avoid each other, which means their motion is correlated. They're sort of like a cat chasing its tail: when one starts going this way, the other may go that way, and so forth, and so the electrons may not be moving around independently. That kind of electron correlation is a very important aspect of multi-electron atoms and higher-dimensional systems, but we'll touch on that later in the course, and for now we'll leave it there. Please take time to look through these transformations, and spend a little time in a quiet room with a pencil and a piece of paper, and just methodically go through each step, and at each step ask: what does it mean? What am I doing? Why can I do that? Go through it and see if you can't figure out why these things have the structure they do. The d by d theta of sine theta d by d theta stuff, with the 1 over sine theta out in front, is going to be a little bit tricky to get, but you can get that too if you work on it. And next time, what we'll do is pick up our solution. We can guess the solution in phi, but we can't figure out the solution in theta yet, because that's a different differential equation, with sine theta in it, and we haven't figured out anything to do with that equation yet. So that's a separate equation for us to solve, and we'll have to figure out what kind of techniques we need to solve it, and then figure out what these functions are, and then hopefully we can get some idea of what these functions on a sphere actually look like. So we'll leave it there, and pick it up next time to figure out the actual wave functions for a particle on a sphere.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:02:23 Particle on a Ring 0:16:49 Quantization 0:24:23 Preparation of Atoms 0:27:16 Spherical Polar Coordinates 0:31:18 Particle on a Sphere 0:33:03 The Legendrian 0:35:06 Spherical Polar Coordinates
10.5446/18887 (DOI)
Welcome back to Chemistry 131A. Today we're going to talk about more realistic vibrational potentials, the Morse potential, the 6-12 potential, and we're also going to extend our vision into at least two spatial dimensions. Rather than talking about a particle in a one-dimensional box, we'll extend it to two spatial dimensions and see what complexities that involves. It turns out for a particle in a box that won't be serious, but depending on the form of the potential, having more than one spatial dimension can make the equations much more difficult to solve. When we last left, I made the remark that we didn't have to do a certain integral, and the reason why we didn't have to do it is because the integrand was odd, or anti-symmetric, and the interval was symmetric. And I'd like to just take a second to show you how that works as a practice problem. So suppose we consider an anti-symmetric function of x; that is, the function of x is equal to minus the function of minus x. Then we have to show that the integral of such a function is zero between symmetric limits. Just the statement of the problem gives us a clue how we're going to have to proceed. We're going to have to make use of the symmetric limits, because it certainly isn't true if we integrate from zero to something — that has a finite value. We're also going to have to use the fact that when we change the sign of the argument, the function changes sign. Let's see how we can do this. The trick here, and it's a very good trick to remember, is that we don't actually try to show that the integral is zero. That turns out to be pretty difficult. What we instead do is we show that the integral is equal to minus itself, and the only number that's equal to minus itself is zero. Oftentimes very tricky things in math rely on not showing that x is two, but showing that x is greater than or equal to two, and that x is less than or equal to two, and then x has to be two. And this is a very good trick to remember if you get stuck on some mathematical problem: try to sort of paint it into a corner by inequalities or relationships rather than just trying to get it spot on. Here's what we can do then. The value of the integral doesn't depend on the dummy variable that we use in integration; that's just a notational consideration. Therefore, if I call the integral I, which is the integral from x equals minus a to a of the function f of x, that's also equal to the integral from u equals minus a to a of f of u, du — same thing. Now I just make the substitution that u is equal to minus x. So I put in minus x equals minus a for the lower limit — that's u equals minus a — and minus x equals plus a for the upper limit; f of u is f of minus x, and du is d minus x. Then I bring out the minus sign from the d minus x, and I have minus f of minus x, integrated from minus x equals minus a to minus x equals plus a. Then I use the fact that the function is anti-symmetric, and I change minus f of minus x into plus f of x. And then I just say, look, if minus x is equal to minus a, that means x is equal to a, and so I'm really integrating f of x from x equals a to x equals minus a. But that's equal to minus the same integral with the limits swapped, and that's equal to minus I. And so by that chain there, I've shown that the integral I, whatever it is, is equal to minus I, and therefore the integral is zero.
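Written out symbolically, the chain of steps just described is short (a compact restatement, with the substitution u = −x):

```latex
I = \int_{-a}^{a} f(x)\,dx
  = \int_{x=a}^{x=-a} f(-x)\,(-dx)
  = \int_{-a}^{a} f(-x)\,dx
  = \int_{-a}^{a} \bigl[-f(x)\bigr]\,dx
  = -I
\quad\Longrightarrow\quad I = 0.
```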
Okay, let's move on and talk about a more realistic potential. The harmonic oscillator gave us a differential equation that we could solve. It was harder than a particle in a box, but we could solve it. We could, with a little mathematical sophistication, write down all the wave functions for all the excited states, and there are an infinite number of them. But it's not very realistic for a real chemical bond, because atoms, when they get too far apart, don't interact very well. We know that from gases and other systems. We can only have a chemical bond, with a force constant restoring the atoms to an equilibrium position, when they're rather close. Otherwise, we would suddenly be bonding to all sorts of things, which does not happen. So the harmonic oscillator is a very bad approximation unless we're near the bottom of the potential. If we're near the bottom of the potential, it's quite good. As we get out further, it becomes very bad. And if we're interested in breaking bonds or simulating the breaking of bonds somehow, it's extremely bad. Therefore, we need something that's a little bit better. And therefore the question is: is there an alternative potential which, one, is more realistic, and, two, still allows us to get exact solutions for the time-independent Schrodinger equation — the energy levels and so forth and so on? And the answer to both questions is yes, there is such a potential. It's called the Morse potential, after Philip Morse, who first suggested it in 1929. It did not take long after wave mechanics was invented for people to start working like crazy on this new field and make all sorts of refinements to the simple problems and really go in depth and advance the field. And this can model vibrations much better than the harmonic oscillator, especially for excited states in a molecule. The number of bound states in the Morse potential is finite. And that means that there is a limit after which the chemical bond breaks and the atoms fly apart. That's already a substantial improvement over the harmonic oscillator. And furthermore, even the bottom part of the well, where we haven't gone up very far, is represented much better with the Morse oscillator than the harmonic oscillator. In fact, the energy levels always get closer together with the Morse oscillator as we go up, as we'll see. Okay, what's the functional form? Well, it's something that's not very obvious at all. The potential as a function of R should be some constant, D e, with units of energy, times the quantity 1 minus e to the minus a times R minus R e, quantity squared. That's not a very obvious potential to choose. And it's not at all obvious at first blush that one could solve that potential in the Schrodinger equation and get solutions. It looks to be fairly complex. Here, D e is called the well depth and a is called the width parameter. And so we have two things to control: we can control how deep the well is and we can control how wide it is. The depth has to do with the total energy before we dissociate. And the width has to do with the spring force constant and the depth; they're related. When a, the parameter in the exponential, is large, what that means is that the well is narrow and there's a larger force constant. Here's a comparison that I've adapted from Wikipedia of the harmonic oscillator, which is in green, and the Morse oscillator, which is in blue. And there are a couple of things to notice. The Morse oscillator is this beautiful function which is asymmetric in pretty much a very realistic way for a real chemical bond. The levels get closer and closer together as we go up.
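For reference, here is the potential just quoted in symbols, together with the usual relation between the width parameter and the force constant (the second equation is the standard textbook result from taking the second derivative at R e; it is an added note, not a formula quoted in the lecture):

```latex
V(R) = D_e\left(1 - e^{-a\,(R - R_e)}\right)^{2}, \qquad
k = \left.\frac{d^{2}V}{dR^{2}}\right|_{R = R_e} = 2\,D_e\,a^{2}.
```

So a larger a means a stiffer, narrower well, and a smaller a means a softer, wider one.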
And there's a definite prediction about what the dissociation energy is. The dissociation energy is not the well depth but it's a little bit different than the well depth because of the zero point energy of the oscillator. And we call the dissociation energy D naught and the well depth D e. And there's an equilibrium distance which we've assumed to be identical for the two oscillators in this case, r sub e, the equilibrium distance. Now, the energy levels for the Morse oscillator which we won't solve the equation and go through all that in this course. But just to quote the result, the energy levels of the Morse oscillator get linearly closer as we go up. The spacing between zero and one is one unit. Then the spacing between one and two might be point nine and then point eight and point seven. And that equation fits this functional form. The energy of quantum level V which is an integer number is h bar omega times V plus one half. That's exactly the same thing as what we got for the harmonic oscillator. Except that kept going and going. And then there's a new term which shows you how clever this potential is because it gives you what you had before and then it gives you just the right correction to make it more realistic. Minus h bar omega x e times quantity V plus one half squared. And therefore there's this quadratic term that's lowering the ladder of states as we go up. Clearly there's some value of V where the quadratic term gets so big that the thing would turn around and head back down. And at that value of V where it starts doing that, that's the highest one. After that the bond is broken and we don't consider that energy equation any longer. And the dimensionless parameter x sub e is just called the anharmonicity constant. It's a measure of the distortion away from a harmonic oscillator and to a different kind of motion that hangs out here for a long time and is definitely not a harmonic oscillator in terms of going back and forth evenly. And as I mentioned at some point V max, the energy of V max plus one according to the formula anyway would be less than the energy of V max. The states stop going up. At that point that's the highest level that can be supported in the potential and that allows us to count how many states could be in there and get an idea of that. The wave functions for the Morse oscillator are quite a bit more tedious to calculate than the wave functions for the harmonic oscillator. And the harmonic oscillator was already much harder than the particle in a box. And therefore we're not going to write down the explicit form of the wave functions. What we usually want in chemistry is the energies to get an idea of what kinds of processes can happen if we put in light of a certain energy. Will it be absorbed? Could I get this thing to break a bond and so forth and so on? Occasionally we're really interested in detail and we'd like to have some idea of the electron density but usually it's very, very hard to calculate the wave function and so there are other tricks like density functional theory to calculate the electron density without actually trying to calculate the full blown wave function. For real bonds then just building on this Morse theory one could suppose that you could have another anharmonicity and in this case it's Y e. For all real bonds the energy levels get closer together when they're near the bottom and that's why the first term has a minus sign because chemists like positive numbers. 
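Since the quadratic term eventually turns the ladder around, the number of bound levels follows directly from the formula just quoted. Here is a minimal numeric sketch, in units of h-bar omega (the anharmonicity value is a made-up illustrative input, not data from the lecture):

```python
import math

def morse_levels(xe):
    """Morse energies E_v = (v + 1/2) - xe*(v + 1/2)**2, in units of
    hbar*omega, listed up to v_max, where dE/dv = 0 and the ladder
    would turn back down (the bond breaks after that)."""
    v_max = math.floor(1.0 / (2.0 * xe) - 0.5)
    return [(v + 0.5) - xe * (v + 0.5) ** 2 for v in range(v_max + 1)]

levels = morse_levels(xe=0.02)   # illustrative anharmonicity constant
spacings = [round(levels[i + 1] - levels[i], 2) for i in range(3)]
print(len(levels), "bound levels; first spacings:", spacings)
# 25 bound levels; the spacings shrink linearly: 0.96, 0.92, 0.88, ...
```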
So we like x e to be a positive number, so we put a minus sign there so that x e is a positive number. But y e, the coefficient of the next term — in v plus one half cubed — in this power series for the energy as a function of v, could be plus or minus, and it can depend a little bit on how many levels you're trying to incorporate into the equation when you do the fitting. And so for that one we have a plus, and we allow y e to be either plus or minus as the case may be. In Chem 131B, when you do IR spectroscopy as an actual method of analysis, you'll find out that IR, or infrared, spectroscopy is a very important method to glean all kinds of information about the strengths of bonds and the rotational and vibrational states of simple molecules. And in fact it's of course the vibrations of molecules like CO2 that are causing all the problems with climate change and radiative forcing, as it's called. As the earth emits infrared radiation — which you can think of as heat — it comes out at certain wavelengths, and some of it, instead of going out into outer space and cooling things off, like taking cookies out of an oven or putting a pie on a window sill, may actually hit CO2 and excite a vibration in the CO2. And the CO2 may vibrate for a while, and then it may decide to emit a photon. And when it emits it, it has lost all memory of direction. It may have spun around and done all kinds of things in between, and so it emits in all directions — in particular, back toward the ground. And what that means is that the rate of escape of radiation is altered, and when you change the rate of escape of heat, you heat up. That's all you do when you put on a parka: you change the rate of escape of heat from your body, and you're much warmer than if you don't have it on. And so if we study the vibrational modes of these molecules in the lab, we can predict which ones, for example, might be very bad to release in large amounts into the atmosphere. And unfortunately one of those is exactly the kind of molecule that they put into car air conditioners — hydrofluorocarbons — which have very large greenhouse radiative forcing and could be very bad to release. The force constants that we obtain when we look at these bonds agree with exactly what we would think would happen when we have a single bond or a double bond or a weak bond or a strong bond. If the bond is strong, then the force constant is pretty big, and if the bond is weaker, it's small. And here are some examples. The force constant here is in newtons per meter. For hydrogen fluoride it's 970 — reasonably strong. For hydrogen iodide it's only 320. Both of those are single bonds, but the overlap between the orbitals on fluorine and hydrogen is much better than with iodine, which is a great big thing with diffuse elephant-ear orbitals and makes a very weak bond. And carbon monoxide has a force constant of around 1860. When we draw the Lewis structure for carbon monoxide, we draw three bonds between the carbon and the oxygen, so it makes sense that that would be an extremely strong one. And other ones, with double bonds, are in between. And so this at least makes very good qualitative sense when we compare our notional idea of the strength of a bond with the force constant of this spring in this quantum mechanical model that we're solving.
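Those force constants map onto vibrational frequencies through the harmonic relation omega = sqrt(k/mu). Here is a minimal sketch using the numbers just quoted (assuming integer-amu atomic masses, which is rough — the point is the trend, not spectroscopic accuracy):

```python
import math

AMU = 1.66054e-27    # kg per atomic mass unit
C_CM = 2.99792e10    # speed of light in cm/s

def wavenumber(k, m1, m2):
    """Harmonic vibrational wavenumber (cm^-1) from force constant k (N/m)
    and atomic masses m1, m2 (amu): nu-tilde = sqrt(k/mu) / (2 pi c)."""
    mu = (m1 * m2) / (m1 + m2) * AMU    # reduced mass in kg
    return math.sqrt(k / mu) / (2 * math.pi * C_CM)

# Force constants quoted in the lecture, in N/m
for name, k, m1, m2 in [("HF", 970, 1, 19), ("HI", 320, 1, 127), ("CO", 1860, 12, 16)]:
    print(f"{name}: ~{wavenumber(k, m1, m2):.0f} cm^-1")
# The stiff triple bond in CO (~2100 cm^-1) and the weak H-I bond stand out.
```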
Now there's another potential that is used, which you should know about and which is of historical importance, mainly because of its simplicity and the way computers worked. This potential is called the Lennard-Jones, or 6-12, potential. It's commonly used, for example, to simulate liquids. In the early days of computer simulation we couldn't do complex systems like membrane proteins and all these other things that we can do now. In fact, what we started out doing was the simplest things, like noble gas liquids: try to simulate liquid argon, something not doing too much, and see if you can figure out, based on the potential, when it will solidify, when it will boil, when it will do this and that. And that can already be an extremely challenging problem with the 18 electrons you have there. But you can make a very approximate model of this, because when two atoms are far apart, there is a very decent model for how they will attract. There's always some kind of charge fluctuation of the charge cloud of one atom — it's very unlikely, if you have 18 electrons, for example, that they're all going to be totally symmetrically distributed every which way at every instant. So suppose, just for a tiny bit of time, the electrons have 10 on one side and 8 on the other. Well, then this side looks positive, and if anything's nearby, then its electrons may rush over, because they may see that there's a positive charge here. And so they tend to attract — and that means they always tend to attract. And when you work out the way it works with two dipoles, one over r cubed, you get an attraction that goes like one over r to the sixth. So a long way away it's flat, and then as you come in it starts going down, down, down like one over r to the sixth. But we know when they get too close that they repel. And they repel because when the electron clouds overlap, Pauli tells us that we can't have more than two electrons in the same orbital, and even then they have to have opposite spin. So they repel, they bounce off, and there has to be a strong repulsive force. There's not such a good theory, however, about what that repulsive force should be — but we know it's very strong. The Morse potential, while it gives this nice theory, has one thing that is a little bit troubling, and that is — go back to the formula, and I'll let you do that — if you put in r equal to zero, so that the two particles, the two nuclei, which we think of as positively charged, are right on top of each other or extremely close, we'd expect tremendous repulsion under those circumstances. But the Morse potential gives us some finite value. It doesn't go up to infinity. Infinity may be too big, but a finite value like the Morse potential's may be much too small, especially when r is tiny. And usually we aren't modeling such small values of r anyway, but if we have things hitting hard, we want to make sure that they hit hard and repel appropriately, if we're trying to model that kind of behavior. In the 6-12 potential, instead of having a finite value like the Morse potential, we just add a term plus 1 over r to the 12, and that goes up much, much faster than minus 1 over r to the 6 goes down. So we get something that goes down to a minimum and then goes up like crazy when you get too short, and then we can adjust, with a couple of parameters, how deep it goes down and where the equilibrium is — the minimum energy. So the numbers in 6-12 refer to the exponents, the 1 over r to the 6 and the 1 over r to the 12. And it's a very nice functional form. Here it is: V of r could just be a number, like A over r to the 12 minus B over r to the 6.
That could be a simple way to put it in. And A and B are your two parameters that allow you to control how deep it goes down and what the best equilibrium distance is — in other words, what's the right scale: do I have basketballs or do I have tennis balls? Usually we tidy it up, though, mathematically, like this. We say V of r is epsilon, some number with units of energy, times the quantity r m over r to the twelfth power, minus two times r m over r to the sixth power. And r m is the minimum and epsilon is the depth of the energy minimum at r equals r m. And as I said, r m is the minimum value, where we're at the equilibrium, where it's most like a harmonic oscillator, for example. Now why pick r to the 12? Why not pick something else? The legacy of that is kind of interesting. In the old days, digital computers were so slow that it was very time consuming to compute this potential. And if I want to simulate a lot of particles moving around, and I've got to figure out who's attracting whom and how much, and who's repelling, then I have to basically compute these V of r over and over and over and over. And computing something like an exponential function or a cosine or sine in the very early days — that was just a lot of machine code instructions and took a very long time to do. And so it slowed everything down. The beauty of this 6-12 potential is that the attraction goes like r to the minus 6, and then once I've got r to the minus 6, so I know what that number is, I just square it — I just multiply it by itself — and I get the repulsive part, the r to the minus 12. And that's just one multiplication. And that's why it became so popular: it let you simulate things for a lot longer than anybody else could with some other, more realistic potential that might involve an exponential function or something else. And then it let you do some interesting numerical experiments on the computer. It's just amazing how much more powerful computers are today than they were when people were doing things like landing on the moon. The computer I used as a graduate student had 8K of memory. And I remember quite clearly when they upgraded it to 32K, I thought, what am I going to do with all that massive amount of RAM? How could I possibly use it all? And when we wrote programs, we used three variables, X, Y and Z, because you didn't want to run out of memory. And because you didn't want to run out of disk, you basically never put any comments in, because that would take more space on the disk. And at the end of the day, you had to be extremely careful to note for yourself what you were doing, because nothing had any name that made any sense. It was all X, Y and Z, I, J and K, and so on, and nothing had any comments. And therefore, if the program went south and you made an error, it could be extremely difficult to debug. Okay. As I mentioned, the r to the minus 12 term — some number over r to the 12 — has no theoretical justification. It was just easy to compute. There are other forms, and nowadays Lennard-Jones is still used quite commonly because it's still fast. But there are other forms, and computer power is so much higher that it really depends on what you're doing, how accurately you want to compute these potentials, what kind of form you use, and what you're trying to match in terms of physical properties.
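Here is that potential in code, with the one-multiplication trick spelled out (a sketch; the argon numbers are ballpark textbook values, not figures from the lecture):

```python
def lj(r, eps, rm):
    """Lennard-Jones 6-12 potential: V(r) = eps*((rm/r)**12 - 2*(rm/r)**6).

    s6 is the attractive 1/r^6 part; squaring it (one multiplication)
    gives the 1/r^12 repulsion -- the cheap trick that made this form
    popular on early computers.
    """
    s6 = (rm / r) ** 6
    return eps * (s6 * s6 - 2.0 * s6)

# Ballpark argon parameters: well depth eps ~ 1.65e-21 J (eps/kB ~ 120 K),
# minimum near rm ~ 3.8e-10 m.
print(lj(3.8e-10, 1.65e-21, 3.8e-10))   # -1.65e-21 J: the bottom of the well
```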
Okay. Let's move now to simple quantum systems in 2D and 3D, and talk a little bit about degeneracy, which is quite important in real systems. Ultimately, to do anything with real atoms or molecules, we have to be able to solve in at least three spatial dimensions, x, y and z. But as a stepping stone, let's try solving in two first and get the math down for that, and then we can extend it to three. How easy or hard it is to solve these equations really comes down to the potential. If the potential has a simple form, then it's easy. If it has terms like x times y in it, it can be very, very difficult to do. Usually the best case is no potential: that's a particle in a two-dimensional box, and that's what we're going to do first. Nothing can go wrong there. And after that, some kind of potential that's additive separately in x, y and z but doesn't have any cross terms — doesn't have anything like x times z or anything like that. Okay. Suppose we have a particle and it's confined to a 2D region. Here's what we're going to say: the potential energy is zero if x is between zero and L x and y is between zero and L y, and otherwise, if you're outside of the box, the potential is infinite. We know by our argument for the one-dimensional box that that means the wave function has to vanish at the edges, both in x and in y; otherwise the total energy is infinite, and the particle cannot have infinite energy. Now we have a two-dimensional equation. We don't know what the form of psi of x, y is, but luckily we have just the kinetic energy: minus h bar squared over 2m times the second derivative with respect to x — we've got to use partial derivatives, because we have to keep clear what we're keeping constant and what we're letting be a variable — plus the same kind of term, the second derivative with respect to y, acting on psi of x, y, equals E psi of x, y. And as I mentioned, we have to use partial derivatives because we have to be absolutely clear what we're keeping constant and not, when we have both x and y in the wave function. The first thing you always try whenever you have an equation like this, even if you don't know anything about it, is you try taking a product. You try saying: I think that psi of x, y is some function in x, f of x, times some function in y, g of y. Now that doesn't encapsulate all possibilities by any means. But what it does do is it means that the energy in x and the energy in y are separate from each other, and they're just going to add up separately. And so that's the simplest assumption: that they have nothing to do with each other. And fortunately, for the simplest kinds of atoms like the hydrogen atom, that's really good, and we can figure out all the beautiful wave functions and make these beautiful electronic orbitals, which we'll get into later on in this course. If this assumption that psi is a product, f of x times g of y, works, then it solves the problem immediately and you're done. And it doesn't take too long to see if it's going to work. If it doesn't work, that means that you've got a bit of a nasty equation, and then you consult an expert. That's why we have experts. They're in their offices there with all their papers piled up, and they can solve a lot of mathematical problems. Or you go to a reference book on differential equations, or you take a more advanced course in differential equations. And all those things are good things to be able to do, so that when you hit a problem like that, you can solve it. If you can't solve difficult problems quickly, the thing is, you lose your train of thought. You may hit a problem and say, gee, if I could solve this, I could figure out this other thing, and so forth.
But if you keep hitting it and you can't solve it and you don't know where it's going and you can't make progress, it's like reading a novel letter by letter: it takes you a long time to recognize you've got a word. The problem is, if you go so slowly like that, you can't even get the train of the story, because by the time you get halfway through the chapter, you've forgotten what the plot was. And it's a little bit like that in these branches of science. If you can't see things quickly enough, by learning how to do it, it takes too long and then you get lost. And then the problem is that you don't even glean the physical chemistry knowledge out of it, because you get lost in this labyrinthine maze of mathematics. Okay. Now, if you can't solve it and there's no expert, what you can do is try to throw away certain things from the Schrodinger equation that are crossing you up, and then treat them as a perturbation, because at least we know how to do that. So we can take any nasty term that prevents us from factorizing the equation, put that aside, and then calculate a correction based on perturbation theory. And that can also be a good way to go if you want to figure out something quickly — whether something is going to have a certain effect or not. How do we know that it's worked? Well, when the 2D equation separates into two 1D equations: one equation that has derivatives with respect to x and blah, blah, blah, x's, nothing else, and then another equation that has everything with respect to y and nothing else — with no psi of x, y left. If you can separate it like that, it's worked. If you cannot, then you're going to have to either treat part of it as a perturbation or get an expert to help you. It's commonplace to use capital X of x rather than f of x, and capital Y of y rather than g of y, but it's the same thing. Capital Y of y is a function that depends only on the argument y — it has no x's in it — and likewise for capital X of x. They can have constants, of course, but Y of y can't have any x in it, and X of x can't have any y. Now our equation looks like this: the kinetic energy operator, rather than operating on psi of x, y, operates on X of x times Y of y, and it gives back E times X of x times Y of y. So we're just putting this in as a guess. And in the partial derivative with respect to x, we can take Y of y, which doesn't have any x's, as a constant, and we can move it through the partial derivative operator and put it out in front as a constant — and vice versa: in the other derivative, we can move the part that we don't have to worry about out in front. And then after we've done these derivatives, we now have Y of y times the second derivative of X of x, plus X of x times the second derivative of Y of y, all times minus h bar squared over 2m, and that's equal to the energy times X of x times Y of y. Now there's one more trick, and the one more trick is that we divide both sides by X of x times Y of y. If we do that, then the Y of y in the first term goes away, and we end up with 1 over big X of x times the second derivative of X with respect to x, and the other term has 1 over Y times the second derivative of Y with respect to y. And that's now equal to E, some number. Now the first term has nothing to do with y — if we look at the first term, it has no y's in it. The second term has a bunch of y's in it. And so if I change little y in the second term, it's going to take different values, and that means the sum couldn't consistently add up to the fixed energy.
So if I pick a particular value of x and let y move around, the sum can't be a constant unless the x term is, separately, some constant, and the second term, which has only y in it, is also some constant. And that's the argument that allows us to separate the equation into two. They have to separate, because when I hold one of them fixed and let the other one fluctuate around, the answer doesn't change; therefore they each, separately, have to come to some constant. Otherwise, that could not be true. So the sum is a constant, and that lets us write these two equations. Now I've said, okay, the energy for the x part is E sub x, and the energy for the y part is E sub y. And now, because I don't have any other variables — except x in one and y in the other — I can also switch from the funny-looking d for the partial derivative to the regular d. And I end up with the same equation that I already had, which was the equation for a one-dimensional particle in a box. One of them uses the variable x and runs from 0 to L sub x; the other uses the variable y — big deal — and runs from 0 to L sub y. And we know what the solutions are, because we worked those out. Big X of x is the square root of 2 over L x, times the sine of n x pi x over L x, where n is an integer: 1, 2, 3, etc. And big Y of y is the same thing — just, instead of x, we use y. And our solution, then, is the product of those. So rather than one sine function going one way in one dimension, we've got one going this way and then one going that way as well. And that means that at the corners, where they're both small, the particle really avoids the corners — the opposite of a mouse — and stays in the center of the box in both dimensions in the ground state. So our final solution for the whole thing, when we put in the energy and so forth, I've written out here in gory detail. The energy E depends on two quantum numbers. It depends on n x, which is the amount of excitation in the x direction — you can think of that as something to do with how the particle is moving that way — and then n sub y, the amount of energy in the y direction, which you can think of as how the particle is moving in the y direction, independent of x. And instead of just one term, it's h squared over 8m times the quantity n x squared over L x squared plus n y squared over L y squared. And the wave function is the product, and we've normalized it so that the integral over the whole box, in both dimensions, is equal to 1. We can clearly see the effect of confinement by looking at real quantum systems. It wouldn't surprise you that if I take a 3D box, and then I make the assumption that the wave function is big X of x times big Y of y times big Z of z, and I go through the same machinations, I find it works — and I find the energy has another term, with n z squared over L z squared. And therefore, if I make particles of certain sizes that are otherwise the same — each with an electron basically rattling around inside, a so-called quantum dot — then if I look at the light emitted as the electron moves between levels, say from some level n equals 2 to 1, emitting light, which can happen, I can get an idea about the size of the box by looking at the light emitted.
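A toy version of that box-size argument, for a cubic box (a sketch: a bare electron in a rigid box, so it captures only the scaling — emitted wavelength growing with box size — not the actual colors of real quantum dots, which involve band gaps and effective masses):

```python
H = 6.626e-34     # Planck constant, J*s
ME = 9.109e-31    # electron mass, kg
C = 2.998e8       # speed of light, m/s

def emission_wavelength(L):
    """Wavelength for a (2,1,1) -> (1,1,1) transition in a cubic box of
    side L: Delta E = (h^2 / 8 m L^2) * (6 - 3)."""
    dE = 3 * H ** 2 / (8 * ME * L ** 2)
    return H * C / dE

for L_nm in (2, 5, 10):
    lam_um = emission_wavelength(L_nm * 1e-9) * 1e6
    print(f"L = {L_nm:2d} nm -> lambda ~ {lam_um:.0f} micrometers")
# Wavelength scales as L^2: double the box, quadruple the wavelength.
```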
Now, how do I get the electron to go up to some high level? Well, I put in a photon with enough energy. Remember, with the photoelectric effect, we could kick electrons up by using light of a certain energy — we showed potassium there. And if we put in light with a certain number of electron volts of energy per photon, we could kick an electron up. And so we can put in invisible light, UV light, which we can't see, and excite an electron up. It rattles around. It comes down. It emits a photon. And the photon has a certain frequency and wavelength, and if it's in the visible region, we can just see it. And this has a practical application. Here I've shown a beautiful illustration, which I've adapted, again, from Wikipedia, that shows that when you have little boxes of different sizes, you get different wavelengths of light. And you can actually choose what you want by choosing the size of the box. Well, that's going to be extremely useful if you've got an application for a display, and you want the color to be red, or you want this or that. And so this has immediate applications in all kinds of fields — this ability to just change the size of the box, not necessarily change the material, but just change the size, by how long you let the synthesis go, or how you heat it, or various other tricks of colloidal synthesis, and just change the appearance of the final thing. Okay. Let's do a practice problem now on degeneracy, because degeneracy comes in in this two-dimensional box. Let's go ahead and consider a 2D quantum box, and let's ask the following question: under what circumstances would there be more than one distinct wave function with the same energy? That's what degeneracy means. It means that there's more than one wave function that has the same energy as another one but is a different function. Usually what happens is, if you've got degeneracy, it means that you've got some kind of symmetry. If you've got a symmetry in the problem, then oftentimes you end up with degeneracy, and likewise, degeneracy can be a clue that you have symmetry. There's also something called accidental degeneracy, where two things just happen to coincide — but that's not likely to happen. Usually it's symmetry related. So let's have a look at what would be a symmetry. Well, if we look at the quantized energy, we see it has n x squared over L x squared plus n y squared over L y squared. We can easily see, for example — the easiest case — suppose we choose L x and L y to be the same, so instead of a rectangle we have a square. Then if n x is 1 and n y is 2, that's the same energy as n x 2 and n y 1, and they're related by symmetry. Now, this is maybe not the only condition. I'll let you puzzle about how you might discover all the conditions; it's kind of an interesting problem. But let's just have a look at this symmetry. What I've done here is plot what it looks like for L x equals L y, and to make it more interesting, I picked n x equals 2 and n y equals 3, and then we'll compare that with n y equals 2 and n x equals 3. If we plot them as contour plots, where light color is the wave function pointing up and dark color is the wave function pointing down, you can see that when n y is 3, we've got 3 lobes — up, down, up — in the y direction, and when n x is equal to 2, we've got 2 lobes, which is just down, up, the other way. And they're multiplying each other, so you get this kind of egg-carton pattern of light and dark areas indicating where the wave function is positive and negative. And if we look at the other condition, where we swap the quantum numbers, all we do is change whether the 3 lobes go across or up.
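The swapped pair shows up immediately if you just tabulate the energies. Here is a tiny enumeration for the square box (a sketch, in units of h squared over 8m, with L x = L y = 1):

```python
from collections import defaultdict

def box_energy(nx, ny, lx=1.0, ly=1.0):
    """2D particle-in-a-box energy in units of h^2/(8m)."""
    return (nx / lx) ** 2 + (ny / ly) ** 2

levels = defaultdict(list)        # energy -> list of (nx, ny) states
for nx in range(1, 5):
    for ny in range(1, 5):
        levels[box_energy(nx, ny)].append((nx, ny))

for energy, states in sorted(levels.items()):
    tag = "  <- degenerate" if len(states) > 1 else ""
    print(f"E = {energy:5.1f}: {states}{tag}")
# E = 13.0 lists both (2, 3) and (3, 2): the symmetry-related pair
# from the contour plots just described.
```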
And it turns out that that's exactly the same as just taking the whole box and rotating it, which is a symmetry operation. And so we can see that the fact that they have exactly the same energy reflects the fact that you can take one wave function, grab the whole thing, rotate it in space, and just change what you're calling x and y. It's the same shape. It's going to have the same energy, of course. And this is a very important point: symmetrical systems always have degeneracy. And whenever you've got energy levels that are close together, that means that a perturbation can have a big effect, because now anything that changes the length of one dimension, or does something, can make one of them slightly lower than the other one. And oftentimes that happens. In other cases, there may be individual tiles rather than the whole thing rotating, for example. I could have a situation where subsections of it rotate like gears and make a new pattern, and that would — or could — still have the same energy. So it needn't be such a simple thing as the overall box rotating; it could be some more complex relationship between the things. And you can explore the situation, for example, in which one dimension is twice the other one, and you could have different situations. And in 3D, of course, because we have this extra term, n z squared over L z squared, the possibility for degeneracy is higher, because now we have a third dimension, and if any of the lengths are equal, or there are multiples, or various other conditions, we could have energy levels that are close to the same energy. And so there are more chances for degeneracy. So, in closing: if there's any perturbation to the quantum system, it will usually lift the degeneracy. And what happens is, if we have two energy levels that have the same energy, but we only have enough particles to fill them up halfway, what may happen is that something may change so that one energy level is lower than the other. And then both the particles — let's say two electrons — go into the lower state, and that spontaneous symmetry breaking can cause some complexes to get distorted, so they change from their ideal shape. And that's a very interesting area of study, for example, in inorganic chemistry of some complexes. And the reason why that happens is just because there's a difference in the number of particles versus the number of energy levels that are available. And so, of course, the system may try to adjust — move slightly, change the length of the box, so to speak — so that one level drops down, and then that one's occupied. And once that happens, that's the lower energy state, and it just stays like that. I'll close there. And next time what we're going to do is do more realistic two-dimensional problems. We're going to start with a pseudo one-dimensional problem, which will be kind of interesting: the particle on a ring. We mentioned that when we were talking about the de Broglie wavelength, but I want to go back to it. And then a two-dimensional problem, which is a subset of a three-dimensional problem: the particle on a sphere. And these will be our stepping stones to get up to understanding how atomic orbitals are formed in atoms like hydrogen. So we'll do that next time around.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:01:02 Odd, or Antisymmetric, Functions 0:05:57 The Morse Potential 0:18:05 The 6 - 12 Potential 0:27:28 Quantum Systems in 2D and 3D 0:28:41 Particles in a 2D Box 0:39:49 Quantum Dots 0:42:40 Degeneracy 0:44:40 Particle on a Ring
10.5446/18885 (DOI)
Welcome back to Chem 131A. Where we last left our hero, we had decided that it was possible for a light particle to tunnel through a forbidden region — much like a high jumper clearing the bar without going over it, just appearing on the other side and collecting the trophy without having enough energy to actually go over — a phenomenon that we call tunneling, to indicate that we went through, but we did not go over. Today we're going to talk about tunneling microscopy, and that's an application that turns out to be very, very interesting for a lot of reasons, and then we're going to introduce a little more complex problem on vibrations. The reason why vibrations are a more complex problem is that the potential energy for a vibrational problem is not a square well or something that's mathematically so easy. It's trickier, because we get x squared in there, and we're going to see how we have to handle that. It certainly seems at first blush that this phenomenon of quantum mechanical tunneling is just a small niche field for experts and people in ivory towers to study, but just like a lot of basic research, it oftentimes leads to killer applications, and this is very, very true in this case. We saw, for example, that the phenomenon that when you measure something, you cause it to change, allowed us to do quantum cryptography, so that we could have a key where we could tell if somebody was spying on us, and we could establish an unbreakable code between us. Likewise with this phenomenon of tunneling: when the barrier is big, the tunneling depends exponentially on the distance, and that means it's very sensitive to that distance. That means that something that's close tends to dominate everything, and that gives us a trick for making what behaves like a very sharp point. And one of these applications of tunneling is called the scanning tunneling microscope, or the STM, sometimes called a scanning probe. It was Gerd Binnig and Heinrich Rohrer, working at IBM, who were granted a patent on the scanning tunneling microscope in 1982, and I've given you a reference here on Google. They have a list of basically all the patents that have ever been granted. They're public knowledge; you can search them, you can find out — and boy, are there a lot of them. And this patent was 4,343,993, back in 1982. You take a sharpened metal tip — and I mean really sharp, as sharp as you can make it, though as we'll see, it may not matter how sharp you make it, because when you look closely, it's going to be extremely sharp no matter what, if you're fairly lucky — and you bring it up to a clean surface, like a gold surface, and you put a voltage on the tip. Then there's nothing between the tip and the surface except a vacuum. And the way we interpret that is that the electron can't come off the gold atom and get very far, because its potential energy gets too high as it goes away from the atom. The atom has a big positive nuclear charge that's pulling the electron down; that's holding it down. And therefore crossing through this region of space is like jumping through that region V in our tunneling problem. Classically, it should not happen. We need to have a conductor to have a current with electrons.
But if the electrons are waves, then if we get near the surface of the gold or something else, the fact that the wave function can sneak out a little bit means that if it can sneak into the tip, there's some chance that the electron materializes in the tip. And that gives us a current. And this current is going to be some noisy thing, because it occurs by tunneling, but it's going to be a current, and it's going to depend on distance: if we move the tip closer, it's going to get much, much bigger, because as the barrier gets thinner, the tunneling gets exponentially more likely. And what that means is that the position of this tip floating over the surface is extremely sensitive to the distance. How can you use that to do something? Well, this is not a microscope in the conventional sense of a light microscope, where you might look at a hair or look at cells. This is only for looking at the topography of a surface — but it can be fantastic, because you can put pieces of DNA on a surface or something like that, and you can use variants — the atomic force microscope, for example — to look at these things and see all kinds of things. The amount of current that you get is a measure of the local density of states. In other words, it has to do with how many electrons can be there, how their wave functions are on the surface, and the distance of the tip to the actual surface itself. And this works with a conducting surface. If we raster the tip over the surface by moving it back and forth, and we keep track of where we are with a little system of piezoelectrics — it's just like a GPS for your car, it knows exactly where the thing is — then what we can do is, if the current gets too big, we assume we're too close to the surface and we pull the tip up, and we try to keep the current at a certain value. And what we keep track of is the height, as a function of position, while trying to keep the current locked onto a certain convenient value — one that means we're close enough to get a signal, we know we aren't so far away that we aren't getting anything, but we're far enough away that we can move reasonably. The problem is, if we're too close, and we come along with the tip and there's a mesa or something on the surface, then as we come along the current will increase and increase and increase, but then we'll crash the tip into the surface, and then we put a notch in the surface, sort of like scratching an LP in the old days if you were careless. And we may change the tip, because the tip has atoms on it, and they may get knocked off and knocked around. So we have to strike a compromise: we want a current that tells us we're in contact — we aren't actually touching, but we're in tunneling contact with the surface — but we don't want it to be so high that if we move too quickly, we're likely to crash the tip. In the early days of doing these experiments, people crashed the tip into the surface all the time. Now there are commercial machines, and people — avid hobbyists — have even made STMs on their own. You can actually make this device; it's not so difficult to make. Because of the exponential dependence, then, let's just imagine the tip. The tip, whatever it is, has been sharpened. It's very sharp — as sharp as you can make a sharp thing. And why do you want it to be so sharp? Because you want to be able to see little egg-carton things of atoms and things on the surface. And so your tip should be very sharp.
If your tip is this big wide thing, then you kind of blur everything together and you can't see anything. But it turns out, because of the exponential dependence, that if there's a cluster of atoms hanging off the tip like a bunch of grapes — and it could be anywhere, it doesn't have to be in the middle of the tip — then whatever is closest, all the current is going to go through there. And so that's perfect, because it means that you just get lucky with the tip, and then all the current goes through there, and the tip sort of makes itself sharper by the way the current depends on the proximity. And you'd have to be very unlucky for that to fail: unlucky would be two tips coming down, two bunches of grapes, both the same size, which would give you a very confusing image. That isn't very likely to happen. The other ones contribute exponentially less current, and that means that they don't bother you much. You can experiment with different tips — and people did a lot of that — until you find a good one. And then you conduct experiments with this good tip. You try to get your PhD with it if you can, until you're unlucky, or you're too aggressive or something, and you crash the tip into the surface. And of course, most of the time when you're looking at things with the STM, you're looking at things that you kind of assume are pretty smooth. You're trying to look at details of a flat surface of atoms. You're trying to see, for example, if you have a material with two different kinds of atoms, whether some of the atoms might like to be on the surface of the material in a different amount than in the bulk. And you can see that kind of behavior with this. And you can also use the STM to do chemical reactions. If you have a surface of some molecules and you bring down your tip — now you can influence things, because you've got current going through — what you can do is do a pulse, sort of like having a meteor come in and hit, and you can make a small chemical reaction at that point and write a little dot there, for example. And then you can move the tip and pulse it again, and you can modify the surface by doing this over and over. And there was interest in that at one point. Here's a figure from the patent. You can see, back in 1982, you drew things by hand — and you drew a lot of things by hand. In fact, you used something called a LeRoy lettering set back then to get the numbers to look nice and so forth, and you drew them in India ink. This is from the original scan of the original document. Here they're showing this tip. They're showing a flat surface. They're showing Z, which is the up and down. They're showing X and Y, which they're controlling with these piezoelectric controllers. They're showing a plot of what they get. And they're showing a screen, which is going to show the topography — what the current did as a function of Z. And here's figure 3 from the same patent. What you can see here is that they're showing the current as a function of the distance of the tip from the surface. And they're drawing an exponential, which is exactly what we derived for tunneling. They're showing that the current should go like e to the minus something, and you can see that there's a distance in the exponential. That's the exponential distance dependence that we derived for a very unlikely event — and you want this to be unlikely; you don't want it to be too likely. And here is a schematic again, figure 4 from their patent. There's a surface. There's a tip. The tip is very near the surface.
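That exponential in figure 3 is worth putting numbers on. A minimal numeric sketch (assuming a simple one-dimensional barrier of height phi = 5 eV, a typical metal work function — an illustrative value, not one taken from the patent):

```python
import math

HBAR = 1.0546e-34   # J*s
ME = 9.109e-31      # electron mass, kg
EV = 1.602e-19      # joules per electron volt

phi = 5.0 * EV                           # barrier height (work function)
kappa = math.sqrt(2 * ME * phi) / HBAR   # decay constant of the wave function

# Tunneling current ~ exp(-2*kappa*d): compare gaps that differ by 1 angstrom.
boost = math.exp(2 * kappa * 1e-10)
print(f"kappa ~ {kappa:.2e} /m; moving 1 angstrom closer boosts current ~{boost:.0f}x")
# Roughly a factor of ten per angstrom -- which is why the closest atom wins.
```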
The tip has some shape. And all the current is coming from the part of the tip that's closest to the surface. So that part of the tip is a super sharp part that's giving all the current. And the other part is giving a little current, which is kind of a noisy background, but it's not a big deal. And then you're going to move the tip along. And you're going to keep the current constant. And by keeping track of how you have to adjust the height of the tip to keep the current constant, you get a picture of the lay of the land. The tip can also be used to actually pick up atoms and move them around. And here's a spectacular example of that, which Don Eigler published in Nature in 1990. And again, working at IBM, here you have a very, very flat atomic precision nickel surface. And on it are scattered some xenon atoms. Xenon atoms have a lot of electron density. And they show up there as these round things, just like you might imagine a xenon atom should look. And you can see in frame A, they're all randomly positioned. And what they were able to do is first use the microscope to see where everything is. Be quite careful. And then go to an exact position where you know there isn't any atom, and of course, this has to be extremely cold. This is basically at liquid helium temperature, because if this is at room temperature or even liquid nitrogen temperature, it's going to be like drops of water on a fry pan. They'll be moving all over the place, and it won't matter that you scan through and see where they are, because the next time you come through, they're going to be somewhere else. And half the time, they'll just pop off the surface entirely. There won't be enough sticking for them to stay on. So this is extremely, extremely cold, which is also a challenge, because you have to cool your microscope, you have to cool the surface, you have to cool everything down, you have to be extremely careful. And then you come down with the tip on top. You know it's still there because it's so cold. And you actually push on it, and then you drag it somewhere. And with your XY Magical GPS system there, you park it, and then you go get another atom, and you drag it, and another one, and you drag it. And then in between times, you then image the surface, very gently, so that you are not dragging any atoms, and then you can see your progress. And what they show here is they can write out IBM in xenon atoms on a nickel surface using the STM. And this was really just a spectacular example of how you can manipulate the very smallest things with such exquisite detail using this device. Unfortunately, I think nobody has figured out how to use the device to make extremely small things like computer chips or other things that might be ultra, ultra, ultra miniaturized because it's too slow, it takes too long, and it has limited ability to make any kind of 3D shapes. Here's another image. This is from a group at Carnegie Mellon in the physics department. And what this is, is this is a picture of the surface of silicon, and the 1, 1, 1 just means that it's a certain crystal plane. And so if I take silicon, it has a certain crystalline structure. I can cut at certain angles, just like cutting a diamond, and I cut at certain angles, and I would expect to have certain kinds of atomic patterns. But what tends to happen once I cut is that the atoms are very unhappy if they're sticking out too far. 
They're very unhappy because they don't have enough bonding neighbors — so-called dangling bonds, bonds that are going out into space doing nothing. And what they may decide to do, if they're unhappy enough — it's sort of like a lonely person going to a bar — is pull in and try to make extra bonds with other atoms, which has nothing to do with the original structure that you would expect, and that's called reconstruction. And here you can see this so-called 5 by 5 reconstruction of silicon, and you can even see there's one defect in the middle of the picture, where there's an atom that's kind of dislocated, out of position — but most of them seem very perfect, so very interesting. Of course, the color here is false; the color is just to guide the eye. All right, now let's go on to vibrations. Vibrations are important because when we do a chemical reaction, we take a chemical bond and we usually break it. We break it and we make a new bond, and the whole business of chemistry is to take stuff that's organized as the same atoms but is worthless — just junk, manure — and then we make and break bonds, do a little witchcraft, and out comes some very, very important antibiotic, which is worth a lot more money. And in order to understand how that works in detail, we need to have a very good idea of how strong the bonds are. We need to be able to predict, if we're going to make something, whether it's going to have strong bonds or weak bonds, and we need to understand also if it's going to absorb light, so that we can do an assay — like IR spectroscopy — to see if we've made what we think we've made. Here what I've shown is a potential energy curve for the hydrogen molecule, H2, the simplest molecule. The proton-proton distance is along the x-axis in picometers, and the energy is the electronic energy: we calculate this curve by putting the protons together at different distances, and then we freeze them — even though we know they can't be frozen, by the uncertainty principle — and we calculate the electronic energy. And if the protons are too close, they tend to repel each other; plus the electrons, the orbitals, are too close together — it's not optimum. If they're at the right distance, then the electrons can be in between. Each proton sees both electrons — part of the principle of bonding, as we'll see, is that they share — and each proton, each hydrogen, thinks it's a helium atom, and that's a very stable configuration. And then as we tend to pull the protons apart, the electron clouds can't overlap, this proton cannot see anything to do with that electron, and so the strength of interaction decreases, and finally, when they're far apart, they're just two hydrogen atoms. And that's shown then in this so-called potential energy curve, which is just the electronic energy plotted as a function of the frozen distance of the two protons. When they're too close, you see that the electronic energy is above zero. That means that when they're that close, they're more unstable than just two hydrogen atoms apart. But there is a well — there is a position where the two hydrogen atoms working together are much more stable than two hydrogen atoms apart, and that's the stable H2 molecule. And then the potential curve goes back to zero as they go back toward just two isolated hydrogen atoms. Now, near the bottom of the well, at the equilibrium distance — which was 74 picometers in that previous figure — the potential has a minimum.
And where the potential has a minimum, calculus tells us that the slope must be zero. And that means that if we expand the potential V of R — that curve, whatever the form of it is — in a Taylor series, which we do by taking the function value and then the derivative and the second derivative and so on around the equilibrium position, we can write R as R e, which is the equilibrium, the lowest point, plus R minus R e. So again, a trick of rearranging something by making it seem more complicated. And we can write that as V of R e plus delta R, where delta R is how much the bond is stretched or compressed from the equilibrium. What we get in the Taylor series is V of R evaluated at R e, which we'll just call V of R e, plus the derivative of V of R evaluated at R e, times delta R, plus 1 over 2 factorial times the second derivative of V of R evaluated at R e, times delta R squared, plus blah, blah, blah — it keeps on going, same pattern. If we look at the bottom of the well, the derivative is zero, and therefore it simplifies quite a bit. Near the bottom of the well, the derivative is zero right there, so we throw that term away, because we evaluate that term right at R equals R e. If we're near the bottom of the well, then R is close to R e. So what we will assume, then, is that R minus R e squared is something, but R minus R e cubed and all the higher ones are too small, because R is very close to R e, and so they're much smaller. And if we do that, we end up with the following very simple form for the potential, which is what we're going to use when we do the Schrodinger equation, because we don't want to use the real potential or we'll never get out alive — it would be far too difficult for us to do. We get V of R e plus one half k times R minus R e, quantity squared, where k is the second derivative of V with respect to R, evaluated at R equals R e. And here — I apologize for using k again — k is conventionally the force constant of the spring. Before, k was the wave vector, e to the i k x; this is a different k. And there's another k, Boltzmann's constant, which you may write k sub B to try to keep that one separate. But we'll use k, when we talk about vibrations, to mean the force constant of a spring. And for small displacements, then, we have that the motion should be harmonic — unless k is zero. If k happens to be zero, then that term goes away, and then the whole motion is described by something very funny, whichever terms are left over. But usually k is not zero, because the thing comes down and goes back up, and so it has some part of it that's quadratic around the bottom, and that's going to be the main part of the actual potential. Therefore we can model a chemical bond as a one-dimensional harmonic oscillator. We totally ignore any kinds of other displacements, or other directions, or anything funny, and we just say: look, these are two things on a line here. There's a distance between them. We know the energy. What we want to figure out is the wave function as a function of this distance between them, given the form of the potential energy. We can always adjust the energy zero, so we can call the bottom of the well zero even though it's not — even though for hydrogen it's about four and a half eV down. We can call it zero, and then we can just add that back later if we want to get the real energy.
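In symbols, the expansion just described, with the slope term dropped at the minimum:

```latex
V(R_e + \Delta R) = V(R_e)
  + \underbrace{V'(R_e)}_{=\,0}\,\Delta R
  + \tfrac{1}{2}\,V''(R_e)\,\Delta R^{2} + \cdots
  \;\approx\; V(R_e) + \tfrac{1}{2}\,k\,\Delta R^{2},
\qquad k \equiv V''(R_e).
```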
So we don't have to worry about that offset in the math; we'll call the potential zero when the displacement is zero. And just in keeping with what we've done, for consistency, rather than using r minus r e, I'll just introduce a variable x, and psi will be a function of x; so if x is zero then they're at the equilibrium, and if x is something else, plus or minus, then it's away from equilibrium. We get the same tired old time-independent Schrodinger equation to solve: minus h bar squared over 2m d squared psi dx squared plus 1 half k x squared psi is equal to E psi, and given k and m our task is to figure out what the allowed values of E are and what the functional form of psi is. For the particle in the box, the allowed values of E went like n squared and psi was a sine wave. Now we've got a different potential completely. It's got this x squared, so it keeps continuously changing, so we could guess it's going to be quite a bit harder to do, and that would be a very good guess, as we'll see. If we have the two nuclei connected together, let me just remark that the mass m here is really the reduced mass of the oscillator, m1 m2 over m1 plus m2, but we won't worry about that for what we're doing. We just want to get a qualitative feel, so we'll just keep m, and m is some mass associated with the oscillator. Now before we solve the differential equation, it's a good idea to take a second and try to figure out what it is we would predict we should see. That way, if we get something ridiculous because we made a mistake, we'll know it. So the question is what properties the wave function should have, and that's pretty easy to suss out. First, the energy will be quantized. Why? Because the potential is going like that, and something that's going like that is tending to confine the particle. The particle cannot just go anywhere, because the potential gets bigger and bigger and bigger and bigger, so it's going to be trapped. If it's trapped, it's got to be quantized; to get way out there it would have to go way up in energy. It has to fit into the space, and therefore it's going to be quantized. We don't know what shape it's going to have. Secondly, the lowest energy eigenstate can't be zero energy. It wasn't zero energy for the particle in a box either. The problem is that if we picked zero there, the wave function went away, and we have a similar problem here. If we want to have a real wave function, it's going to have to be in there. It's going to have to satisfy the uncertainty principle, and therefore it's going to have to have nonzero p squared, and it has nonzero x squared because it's not an infinitely narrow box, and therefore it's going to have nonzero energy. And thirdly, the wave function has to die away somehow as x gets far from zero. The reason is that the potential keeps getting larger, and also it has to die away to zero because it has to be normalizable. And finally, the ground state should have no nodes, and the reason for that is by analogy with the particle in a box: when we did the particle in a box, it was zero at the edge, because it had to be, but it wasn't zero anywhere in between. It was just a lump. And if we change that potential to this one, we expect this lump to change, but we don't expect it to change into two lumps. Therefore we expect something like a turtle in there somehow, sitting there, not very exciting, but just sitting there. And if there are excited states in this potential, we would expect them to have nodes, just like the higher excited states of the particle in a box.
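So, for reference, the problem to be solved is (with m understood as the reduced mass for a diatomic):

```latex
-\frac{\hbar^2}{2m}\,\frac{d^2\psi(x)}{dx^2} \;+\; \frac{1}{2}\,k\,x^2\,\psi(x) \;=\; E\,\psi(x),
\qquad x \equiv R - R_E,\qquad m \to \mu = \frac{m_1 m_2}{m_1 + m_2}
```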
But how should we solve the differential equation? Remember I said the most powerful method to solve differential equations is often guessing, and I'm going to try to guess. I have something times the derivative of the wave function twice, plus some other stuff, equal to the wave function times a number. I've got to get the other stuff to go away. Therefore I need a function that generates itself again when differentiated, so I can get this part, and I need it to generate a little other garbage times itself, so I can get rid of the one half k x squared, which I want to go away because there's no one half k x squared on the right-hand side. And I couldn't use just e to the x to do that, because e to the x will only give itself times a number. But I could use e to the something else, and I would expect that I'm going to have to use an exponential function, because those are always the solutions of these differential equations. Furthermore, I could guess that, look, this thing has symmetry: if I draw the potential, at the bottom x is zero and then it's going up, and it's symmetrical, and therefore the wave function has to be symmetrical too. And that means we can't have anything like e to the minus alpha x or something like that, because that's not symmetrical around zero. We could have that plus e to the plus alpha x, but we can see right away that neither of those would be any good. Since those don't look good, you try the next power up, and if you try the next power up, which is a Gaussian function, you get very lucky, and in fact it seems to work. So let's guess. Let's guess psi of x is equal to capital A times the exponential of minus little a times x squared. Capital A is the normalization constant; we won't worry about what that is at this time. And little a is something that we're going to have to pick to make it work, and we'll see what the condition is. You wouldn't necessarily know that this would work, but you can easily work it out, so let's go ahead and work it out. We want to take the second derivative and multiply by minus h bar squared over 2m. The first derivative of that function is that function again times the derivative; remember, the derivative of e to the u is e to the u du dx. We have minus a x squared as u, and the derivative of that is minus 2ax, so the first derivative is capital A times exponential minus a x squared times minus 2ax, that whole thing. For the second derivative, now we've got a product of two things. We've got u times v, and the derivative of that is the derivative of u times v plus the derivative of v times u. And we've done the derivative of the first one before; therefore the second derivative is capital A e to the minus a x squared times minus 2a, plus capital A e to the minus a x squared times minus 2ax times minus 2ax again. The first minus 2ax comes from du dx; the second minus 2ax comes from the fact that that second factor is there. And we can then put these together, and we see that we got what we want: we got capital A times e to the minus a x squared, so the same thing is reproduced, and then we have this prefactor, and it has two parts. It has a 4a squared x squared, which, if we pick a right, is going to cancel out the one half k x squared, and then it's got the other part, minus 2a, which is going to have something to do with the energy. Let's have a look.
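Collecting that chain-rule computation in symbols:

```latex
\psi(x) = A\,e^{-a x^2},\qquad
\frac{d\psi}{dx} = -2ax\,\psi(x),\qquad
\frac{d^2\psi}{dx^2} = \left(4a^2x^2 - 2a\right)\psi(x)
```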
If we put everything into the Schrodinger equation, we come to the following conclusion: minus h bar squared over 2m, times capital A e to the minus a x squared, times 4a squared x squared minus 2a, plus one half k x squared times, again, capital A e to the minus a x squared, is equal to E times the same thing. And so now we can divide both sides by capital A e to the minus a x squared and get a relationship between little a and k, if we want those terms to cancel, because there's no x squared term on the right-hand side; it's just E. For the terms in x squared to cancel means that minus h bar squared over 2m times 4a squared x squared plus one half k x squared is equal to 0. And for that to be equal to 0 for all values of x, little a has to equal the square root of mk over 2 h bar. And so the exponential argument has to do with the mass and the spring constant and Planck's constant. And boy is that sweet, because this is quantum mechanics and that's exactly the kind of behavior we would have expected to see. And we can make a connection with the classical oscillator. If you've done the classical oscillator (if you haven't, then you should), what's the angular frequency of the classical oscillator? Well, if I see this thing going back and forth like that, I can interpret it as the projection of something going around. Because if something's going around, it's going like this, and that angular frequency is the square root of k over m. And since omega, the angular frequency, is the square root of k over m, the square root of mk is equal to m times omega. And using that we can now figure out the energy. The energy is minus h bar squared over 2m times minus 2a; that's the only part that's left over, because the x squared part is gone by our choice of a. And that's h bar squared over m times the square root of mk over 2 h bar. And that's h bar over 2m times m omega; the m's cancel, and we get h bar omega over 2. That is the ground state energy of the oscillator, which is not zero. The oscillator is always a little bit excited. It can't be zero because of the uncertainty principle. And we can make a connection with what we found with light by saying, look, omega, the angular frequency, is 2 pi nu, nu being the regular frequency. So h bar omega over 2 is also equal to h nu over 2. And h nu was the quantization of a photon. So this is similar to the quantization of light, except that now there's a factor of 2 in the denominator for the energy. But other than that it's very closely related. Knowing the value of little a now, we can normalize the entire wave function and determine the value of big A, because now we know how wide this Gaussian function is. We know what area it's going to have when we integrate it, and so we know how big this big A has to be to make the probability of finding the displacement somewhere equal to unity. And our normalization condition is that we should take the square integral of this thing from minus infinity to infinity, as usual. And the integral of e to the minus 2a x squared dx, which is a standard integral you can look up, is the square root of pi over 2a. And capital A squared times that should be equal to 1. And therefore we find that capital A should be 2a divided by pi, all to the one-fourth power. We've picked capital A, as usual, to have real phase, because we always like A having real phase. We're just biased toward that.
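If you want to check this algebra by machine, here is a minimal SymPy sketch (assuming SymPy is installed; the symbol names are mine). It applies the Hamiltonian to the Gaussian guess with the value of little a just derived and recovers the ground-state energy:

```python
# Verify that psi = A*exp(-a*x**2), with a = sqrt(m*k)/(2*hbar),
# is an eigenfunction of the harmonic-oscillator Hamiltonian.
import sympy as sp

x, m, k, hbar, A = sp.symbols('x m k hbar A', positive=True)
a = sp.sqrt(m*k)/(2*hbar)                 # little a from cancelling the x**2 terms
psi = A*sp.exp(-a*x**2)

# H psi = -(hbar^2/2m) psi'' + (1/2) k x^2 psi
H_psi = -hbar**2/(2*m)*sp.diff(psi, x, 2) + sp.Rational(1, 2)*k*x**2*psi

# The ratio H psi / psi collapses to a constant: the energy eigenvalue.
print(sp.simplify(H_psi/psi))             # hbar*sqrt(k)/(2*sqrt(m)), i.e. hbar*omega/2
```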
Picking a real phase is just what we did with the particle-in-a-box wave functions: we had an i in the constant there and we said, well, we'll get rid of the i because it doesn't matter. This therefore gives us our very final form for the ground state of the harmonic oscillator, and that is 2a over pi to the one-fourth, e to the minus a x squared. And then I can put in all the things with m and omega for a, and I get this very nice formula. Now the problem with this approach of guessing is that whatever you don't guess doesn't turn up. And we guessed one thing and we found one thing. And it made sense why: because it's a Gaussian. It's like a big turtle. It has no nodes. It has a low energy, which satisfies the uncertainty principle but doesn't have any more energy than that. But now what we would have to do is try to guess some higher thing. It's not like the particle in a box where we had n pi. We don't have any n yet here in this problem. We just have this one wave function and this one solution. We suspect, and we're right, that because it's like a particle in a box, just slightly different, there should be a ladder of states. And in fact the ladder is equally spaced, at n plus 1 half times h bar omega, which makes this potential really unique, because it's the only form where you get an equally spaced ladder of states all the way up to infinity. And there are some very, very nice mathematical ways of attacking that problem that are very beautiful, but they take us a little bit too far afield for our course, and they don't have that much to do with chemistry. They have more to do with operator algebra and quantum physics than chemistry. And so we're not going to explore those, but we'll just quote that the general solution is some polynomial in x, which you have to pick carefully, times e to the minus a x squared. So that part of it is always the same. Now what is the interpretation of this? It's completely different from a classical oscillator, because for the classical oscillator, the highest probability is at the ends: take a film of it. It goes out, it stretches, it stops, it turns around, comes back through (that's where x is zero, that's where it's perfect), compresses, stops, back through again, back through again, and so on. And the chance of finding it, while it's oscillating, right at the equilibrium position is small. If you just grab it at some point and measure its distance apart, it's much more likely to be either fully stretched, because it has to turn around there (that's why it's called the turning point), or fully compressed. But when we look at the ground state of the harmonic oscillator, what we find is something completely different. The highest chance is that the thing is in the middle, in the perfect position, where it shouldn't be. So it seems as if it's trying to stay in the middle, but it's actually spreading out a little bit, not because it wants to, but because it has to, because of the uncertainty principle. And therefore this ground state of the harmonic oscillator looks completely different from a classical oscillator. And that caused a lot of consternation, I think, in the very early days, because it looked so different, and how do you interpret this thing? You want to make sure you haven't made a mistake or that something's not wrong in the equations. But it turns out this agrees exactly with what we observe. And there are many experiments where we take a molecule and we excite an electron, and it really looks like it comes mostly from the equilibrium position.
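For reference, the result being quoted (the carefully picked polynomials are the Hermite polynomials, though the lecture doesn't name them) is:

```latex
E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega,\qquad
\psi_n(x) \;\propto\; H_n\!\left(\sqrt{2a}\,x\right)e^{-a x^2},
\qquad a = \frac{m\omega}{2\hbar},\quad n = 0, 1, 2, \ldots
```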
Experimentally, we very rarely find something coming from an extended position, so this interpretation is correct. Like I said, the oscillator can't sit still. It's like a small kid. It has to squirm around to satisfy the uncertainty principle. And the zero-point energy depends on the square root of the spring constant k and on the inverse square root of the mass. And this gives rise to something called the isotope effect. Let's have a look at the isotope effect. Suppose we have two isotopes. For example, hydrogen has a single proton. Deuterium is a single proton plus a neutron. It's heavier, but the charge is the same. And deuterium behaves much the same way as hydrogen does. There's D2O; you can use it to make NMR samples, for example. The neutrons don't have any electric charge, so the electrons don't really care too much about the neutrons. And therefore what we expect, to a first approximation, is that the force constant, which has to do with the electronic orbitals and the repulsion of the two nuclei, doesn't depend on whether they're two protons or two deuterons. It depends on the charge and the separation, and the electrons pretty much don't care. There are small effects (the electron is sneaky, it can see the neutron), but basically the form of the potential energy surface is the same. What's different, then, is the mass. And that's why isotope effects are a great field to get into if you're interested in theoretical effects of zero-point energy and tunneling and all these subtle things: you have a perfect situation where everything's the same except the mass. And so your calculation doesn't even have to be quite so good, because even the things in your calculation that are slightly wrong are the same except for the mass. And so unless you're very unlucky, you get a pretty good result anyway. If we have then a C-D bond, a carbon-deuterium bond, versus a carbon-hydrogen bond, the carbon-deuterium bond is stronger in the sense that the amount of energy it takes to go from the ground vibrational state, which is as close to the bottom of the well as you can get, to where the bond is broken, which is the same place on the curve, is higher for the C-D. And here's (we'll close with this) how it might look. On the left we have a molecule that could have either a hydrogen or a deuterium on, let's say, a carbon. And then in the chemical reaction this bond breaks, and as it's breaking, the force constant gets less. So in the transition state it's almost off, and therefore the potential is very wide, because the force constant for a weak bond is not a very stiff spring. But when it is actually the reactant, it is stiff. And therefore there's a big difference in zero-point energy between H and D when it's the reactant, and a small difference when it's the transition state. And that then translates into a different rate of reaction. That means that if we have molecules that could be C-D or C-H and we react them somehow, we expect the C-H ones to react more quickly than the C-D ones. And this is called the kinetic isotope effect; the sketch just below puts rough numbers on it. I've adapted this little figure here from Wikipedia to just show how this works. In fact, this again seems like a very esoteric thing, but you would be amazed if you go to the enzymology literature and you look: how do I know if this enzyme is breaking this bond or that bond, or what is the rate-limiting step for the synthesis of cholesterol, or the removal of something from the body.
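A back-of-the-envelope estimate of the size of the effect, before the enzymology example. All numerical values here (the 2900 cm^-1 stretch, 298 K, the ZPE-only model) are my own assumed, representative choices, not the lecture's:

```python
# Estimate the zero-point-energy part of the C-H / C-D kinetic isotope effect.
import math

mu_CH = 12*1/(12 + 1)                   # reduced masses in amu
mu_CD = 12*2/(12 + 2)

nu_CH = 2900.0                          # typical C-H stretch, cm^-1 (assumed)
nu_CD = nu_CH*math.sqrt(mu_CH/mu_CD)    # same force constant, heavier mass

# If the transition state has lost this stretch, the rate ratio is set by
# the reactant zero-point-energy difference, (1/2) h c (nu_H - nu_D).
hc_over_kT = 1.4388/298.0               # second radiation constant 1.4388 cm*K, at 298 K
kie = math.exp(0.5*(nu_CH - nu_CD)*hc_over_kT)
print(f"nu(C-D) ~ {nu_CD:.0f} cm^-1, estimated k_H/k_D ~ {kie:.1f}")
```

This gives a ratio of roughly 6 to 7 at room temperature, which is the usual ballpark quoted for a primary H/D kinetic isotope effect.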
You put in a deuterium as a spy and you look for the deuterated product versus the protonated product and boy do you find a ton of information. So this is now translated all the way from this esoteric effect of the harmonic oscillator, the uncertainty principle and zero point energy all the way to modern medicine where it's used to figure out what's going on in the body. Next time we'll continue on with some of these one dimensional model problems and then we'll begin to actually do some multi-dimensional quantum mechanical problems. I'll close it there.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:03:27 STM 0:19:19 Vibrations 0:46:33 Zero-Point Energy 0:49:19 Isotope Effects
10.5446/18882 (DOI)
Welcome back to the fourth lecture of Chemistry 131A. Today we're going to talk about complementarity, quantum encryption, and the Schrodinger equation. First of all, suppose that we actually know something about a particle, but not everything. So we know that a particle may be localized in the vicinity of some region of space, and we'll stick with one-dimensional problems for simplicity here. So we have a variable x, and we know that the particle is probably at a position around x naught. The question is: what should the wave function look like for a particle like that? And if we assume a real wave function, we can write something that creates a peak near this particular point x naught in terms of a Gaussian function, a function that's small, comes up smoothly, and goes back down. And that's a very nice function; it has a very simple analytical form, which I've written here. And it's characterized by a position where the peak is, which in this case is x naught, and by a standard deviation, which has to do with how wide the distribution is around the position x naught; in other words, how peaked the wave function is and how closely we know the position of the particle. So sigma in this formula measures the width of the peaked distribution, and a small value of sigma means that the wave function is peaked more strongly (it's really very tall), while a large value of sigma means that the wave function, although it has the same average value, where the peak is, is much wider. I've plotted here the wave functions for sigma equals 2 and sigma equals 1. And both wave functions, when you square them and integrate them, have unit probability that the particle is somewhere in the universe. But you can see in terms of this graph that it's very likely that the particle is quite near x naught; that's the most likely place. And if sigma is 2, there's some likelihood that the particle could be one or two units away, and then by 4 it dies off quite quickly. But for sigma equals 1, there's a 99% chance that the particle is going to be within two units of the position x naught where we think the particle is located. And so the particle is much more localized for sigma equals 1. And this gives us a tunable parameter: we can use the same formula, put in different values of sigma, get a family of wave functions, and then analyze how they behave as we move forward and try to discern things about the momentum or the uncertainty in the position of the particle. Now, we previously worked out that the momentum operator was minus i h bar d by dx, and we found the eigenfunctions for the momentum operator: they're the complex exponentials. And I mentioned that they corkscrew one way or corkscrew the other way, depending on whether the momentum is positive and the particle is moving in the positive x direction, or the particle has negative momentum and is moving in the negative x direction. But the size of this corkscrew does not change in space. It's completely uniform everywhere. And so in a momentum eigenfunction, we know absolutely nothing about the position. The question is: if we know something about the position, but not perfect knowledge (we have a distribution like this Gaussian function), then how much, if anything, do we know about the momentum, and are the two things related to each other? And to find that out, what we have to do is figure out how to write this smooth Gaussian function in position as a linear combination of momentum eigenfunctions.
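As an aside, these statements are easy to check numerically. The explicit Gaussian below is an assumed form, one common convention chosen because it reproduces the unit normalization and the 99%-within-two-units figure just quoted:

```python
# Numerical check of normalization and localization for sigma = 1 and 2,
# with psi(x) = (pi*sigma**2)**(-1/4) * exp(-(x - x0)**2/(2*sigma**2)).
import numpy as np

x0 = 0.0
x = np.linspace(-20, 20, 100001)
dx = x[1] - x[0]
for sigma in (1.0, 2.0):
    psi = (np.pi*sigma**2)**-0.25*np.exp(-(x - x0)**2/(2*sigma**2))
    prob = psi**2
    total = prob.sum()*dx                        # unit probability overall
    near = prob[np.abs(x - x0) <= 2].sum()*dx    # probability within two units
    print(f"sigma={sigma}: total={total:.4f}, P(|x-x0|<=2)={near:.3f}")
# sigma=1 gives ~0.995 within two units; sigma=2 gives only ~0.843.
```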
And then the coefficients of those momentum eigenfunctions will tell us what the chance is, if we make a measurement of momentum rather than position, that we will get a certain value for the momentum eigenvalue. Of course, if we make a measurement of momentum, we will have changed the wave function in a very fundamental way. And so it will not have the same distribution, and it may not be located anywhere near x naught after that. It turns out that you can kind of see what's going to happen just by imagining this thin wave function. If I have a thin wave function and I have a bunch of corkscrews (let's just forget about the imaginary part for the time being and just plot the real part), if I have a bunch of functions that are going up and down and up and down, and I want to make something that's quite narrow and then pretty much zero outside, I can't just use a very long wavelength, because that will never actually be very tight. And I can't just use one wave, because that goes everywhere; that's never going to be zero. So what I have to do in order to make it work is take a bunch of different momentum eigenfunctions, and they have to at least oscillate as fast as this thing is dropping. In other words, if I have a sine wave, it has to drop at least that fast. And then what has to happen outside is they have to all kind of interfere. They're all still there, but when you add them up, they cancel out to zero, which is of course the beauty of waves, and which is what light does all the time. It may take many paths, but many of them cancel to zero. And so what we anticipate in this analysis is that the more localized the wave function is in position, the larger the spread of momentum eigenfunctions we're going to have to use, because we're going to have to use something that oscillates quickly, and that means it has a big value of p, and that means the total spread can be quite wide. And so what that means, if we measure the momentum, is that we're going to get a wider distribution of momentum eigenvalues. So here are a couple of momentum eigenfunctions that contribute to this sigma equals one wave function. And you can see from the graph that the amounts of these momentum eigenfunctions are all small. The p equals zero one is flat; that's zero momentum, no corkscrewing at all. P equals one is a very lazy thing that's barely changing. And p equals 10 is going quite quickly. But because the function dies at around two, p equals 10 is not going to be enough momentum to get a wave that oscillates quickly enough to actually get this peak to be that narrow. And so we anticipate that we're going to need more waves than that. And because all these values are small and because there are a lot of waves, what that means is we get a very wide distribution of momentum functions. And that's because we need this wide distribution to build up this localized Gaussian position function. What I've done in the next slide is show what happens if we take our original Gaussian wave function and approximate it by 20 momentum eigenfunctions, choosing the coefficients so that we get the best fit between our approximation and the true function. And what you can see is that two things are not so good here. The first is that because p equals 20 is not fast enough, we can't make the peak narrow enough. So we can't bring in the skirts of the peak quickly enough, because we don't have anything that's dropping that fast.
The second thing that's wrong is that we get these wiggles that keep going outside. Now they get smaller and smaller, but they go outside the region that we want. And this is reminiscent of diffraction of light through a hole and many other kinds of problems that are encountered quite often. And it's just a fundamental property of trying to cast this function in terms of these sines and cosines, or equivalently in terms of complex exponentials. How can we cure this? Well, instead of using 20 momentum eigenfunctions, suppose I use 50 and try the same thing; then we get this graph here. And now it's far, far better. Now it pretty much tracks the Gaussian function. There are a few wiggles (we're always going to have a few wiggles unless we use an infinite number of functions), there are a few wiggles outside, and the peak, if you look closely, doesn't quite get all the way up to the top of the Gaussian function in the center. And that means we're missing a few functions, and the functions we're missing are the ones that are also creating these wiggles outside. If we put in these extra functions, from 50 to infinity (and they would be very small amounts toward the end), what we would find is that we could match this Gaussian distribution absolutely perfectly. And we can do that with any set of eigenfunctions. We could use position eigenfunctions. We can use momentum eigenfunctions. We can use eigenfunctions of some other operator. And because the eigenfunctions form a basis, just like any point in a 2D plane can be written as this much x and this much y, there's no escaping: if we use enough of these eigenfunctions of any operator, we can exactly match any kind of wave function we're going to encounter. And then when we look at the amounts of these basis functions, that's when we find out what a measurement of momentum, or a measurement of any other operator, will give us. In fact, any reasonable function (and wave functions are always reasonable, because we have to be able to differentiate them and integrate them and so forth), any function at all, can be cast as a sum of sines and cosines, or equivalently as complex exponentials e to the i theta. And that is actually the principle of Fourier series expansion. And that is another good math subject to study, so that you understand exactly how this type of thing works. In fact, if you do study that in a proper course in mathematics, the conclusion you will come to is that the uncertainty principle can be seen, from this aspect, as a consequence of the position and momentum eigenfunctions being related by a Fourier transform. And that right away gives us the uncertainty principle and makes it quantitative. So if we have a very narrow distribution in position, we have a very wide distribution in momentum; we need those fast momentum eigenfunctions to pull the skirts in. If we have a very narrow distribution in momentum, that means that the corkscrews go way out all over the place before they actually interfere and go away, and that means we have a very wide distribution in position, where the particle can be anywhere. They're just flip sides of each other. But we can't have them both be narrow. It's not possible to use one corkscrew and make one spike in position, because they're totally different things. When one is narrow, the other is wide.
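Here is a rough numerical illustration of that flip-side relationship (the grid is arbitrary and h bar is set to 1, both my assumptions), using the fast Fourier transform to get the momentum amplitudes:

```python
# Squeezing a Gaussian packet in position widens it in momentum.
import numpy as np

x = np.linspace(-40, 40, 8192)
p = 2*np.pi*np.fft.fftfreq(len(x), d=x[1] - x[0])   # momentum grid, hbar = 1

def spread(grid, prob):
    prob = prob/prob.sum()
    mean = np.sum(grid*prob)
    return np.sqrt(np.sum((grid - mean)**2*prob))

for sigma in (1.0, 2.0):
    psi = (np.pi*sigma**2)**-0.25*np.exp(-x**2/(2*sigma**2))
    phi = np.fft.fft(psi)                 # amounts of each momentum corkscrew
    dx_, dp_ = spread(x, np.abs(psi)**2), spread(p, np.abs(phi)**2)
    print(f"sigma={sigma}: dx={dx_:.3f}, dp={dp_:.3f}, product={dx_*dp_:.3f}")
```

Both packets come out right at the minimum-uncertainty product of about 0.5, i.e. h bar over 2, and halving sigma doubles the momentum spread.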
Now, this measurement, as I said: if we actually measure the momentum, or make any measurement on this Gaussian wave function, we will have changed it, because it is not an eigenfunction of what we're measuring. And with all the worry about security and privacy, I thought I would do a little bit of a topic here called quantum cryptography. Using quantum mechanics, it's possible to make an unbreakable code, so that nobody can spy on you. And in fact, it's a very simple and ingenious thing. It's being used now in some places in the United States and in other countries. And it's based on the observation that if we make a measurement, the wave function must fall into an eigenfunction of the measured variable. And that means if a spy makes a measurement on something we're transmitting, the spy will influence the data, and we can pick that up. So we can know something's wrong. We can know somebody's spying as well. And we can then just simply stop talking, for example. So how does this work? Well, we need some sort of thing to send, some particle. And the easiest particle to send is a photon. We can send a photon down a fiber optic line. We can send a photon through free space. Lasers are very efficient at making photons that have very nice properties, the best properties available: very narrow wavelength range, and so forth and so on. And we can send lots of photons, one at a time, very quickly. And that allows us to send a lot of information and to establish a method of communication. How could you do this? Well, the first realization of this scheme was put forward, as far as I know, by Bennett and Brassard (I've given the reference here on slide 107) in 1984. So quantum cryptography has not existed for all that long. And this scheme is called BB84, after the two authors and the year in which it was invented. There are other schemes as well, because once you realize how to do it, there are a lot of ways to skin a cat, as they say. But I just want to talk about BB84 and show you how it is possible to establish a communication link and an encryption strategy where nobody can find out what you're doing. In this field, just like in quantum mechanics we have psi (we always use psi for an unknown wave function that we're going to find out about, and we tend to use phi for basis functions), the two parties who are trying to communicate are always Alice and Bob, by convention. And Alice is trying to send data to Bob, and they want to encrypt their data, and they don't want anybody else to know what the encryption key is, so no one else can decrypt the data. And the spy is traditionally referred to by the name Eve, which is perfect, because Eve is an eavesdropper. And how do we do this? Well, I've shown here two possible states of polarization. A polarizer is just a filter, and you can think of any measurement in quantum mechanics as some kind of filtration. If I have a photon polarized this way, up and down, the electric field is going like that. And if I have a polarizer at right angles, the polarizer cuts out that light and I get zero. And if the polarizer is parallel, then the light goes through completely unimpeded, and I get 100%. If I've got one photon, I get the one photon; and if the polarizer is the other way, I get zero photons. And that's perfect for computing, because I can have this be one and I can have this be zero. The trick here, though, is not to just use that, but to pick at random, instead of what I'll call the plus basis, the times basis.
The times basis is just the plus basis rotated by 45 degrees. Now, if the polarizer is this way, I get a one. And if it's the other way and the photon's this way, I get a zero. So I get one and zero again, but I have a 45 degree rotation between this basis and this basis. Now, what happens if I have a photon this way and I have my polarizer this way? Well, then quantum mechanics says that at random (so I can't predict) I get 50-50 one and zero. And after I measure it, the photon, instead of being polarized this way, is polarized either that way or that way. And that's the basis, then, of a completely unbeatable strategy to establish a key between two anonymous people, or two anonymous computers connected up, and then use that key to transmit data. Here's what we do. Alice sends a zero or a one, then another, and so on, at random. And she also chooses the basis at random. How could you choose the basis at random? Well, one way would be, for example, to have a very, very weak radioactive source nearby. That's completely random. If the detector gets an odd number of counts in a certain interval, you pick the plus basis, and if it's an even number of counts, you pick the times basis. So you pick the basis at random, but you record, where you are, what basis you're picking. But you do not send that information. And so nobody else, not even Bob, knows what basis is being picked. And then you send a photon polarized that way (you can change that very quickly with an electric field), and you send photons along to Bob. Now, Bob has no idea what basis Alice has picked, because you don't want to be telling what the basis is; that defeats the purpose of secure communication. Bob has absolutely no idea at all what basis Alice has picked. Therefore, Bob doesn't try to figure out what basis Alice has picked. Bob at random picks either plus or times as his basis, and then measures the photon that comes through, with the polarizer oriented either this way or that way, and measures either a one or a zero. But he has no idea whether what he's measuring is certain or random. The theory of quantum mechanics says: if the photon was sent this way and I measure along that same direction and nothing's intervened, then I get the same value with certainty; or the bases could be mismatched, and then I'm getting a one or a zero at random and I have no idea. We send a whole bunch of photons, lots and lots, and we can do that very, very quickly and very cheaply. You could have something about the size of this remote, with a laser in it, no problem at all. If Bob's basis is, quote, wrong (in other words, it doesn't match the basis that Alice picked at random), then Bob just gets one or zero at random. But the problem is he doesn't know that; he just gets one or zero. And how is he to know what on earth he's getting? He has no idea what he's getting. But what he does know is that he's got a list of what basis he picked at random, whether he picked plus or times. And that's enough. Now, Bob doesn't send back what he measured. He measured a one or zero in sequence. But what he does send back to Alice is a one or a zero depending on whether he picked plus or times as the basis. So he doesn't say what he measured. He just says: on trial one I happened to pick plus. On trial two I picked times. On trial three it was times. Four was times. Five was plus, and so on. And I have lots of them. And Alice, of course, has a list of what basis she picked, but that information hasn't been sent. And Bob hasn't sent what he's measured.
And therefore Alice looks at Bob's choice of basis. And whenever his choice of basis matches what she happened to pick on a certain trial (maybe many of them don't match, because it is random after all), wherever they do match, she says okay: go ahead and keep these values. So, for example, she records the list, and where they match she sends him another message, which has nothing to do with anything that anybody can use, and says: why don't you use whatever you measured on the seventh, the 13th, the 22nd, and so forth, trial or photon. And those are where the bases happened to match. But nobody knows what the measured result was, because Bob never said what he measured. But Alice knows what he measured, because she knows that when the bases matched, he got the same value that she sent, if nobody's eavesdropping. Therefore what they do is establish a sequence of ones and zeros that nobody else can know about. And that's perfect for encrypting data. They both know what the key is. They take normal data, they encrypt it with some scheme using these random sequences that only they know about, and then they decrypt it, because each of them has the key, but nobody else can have the key. Now, how can they find out if something's going on? Well, they can establish a key, but they can have sent many, many, many more photons where the bases happened to match, where Alice was plus and Bob was plus. And what they can do, after they've established the key, is go back and say: hey, you know, tell me what you measured on trial 1001, 1003, 10000, and so on. And if those don't match, or if they don't match 99.99% of the time, then somebody might be listening. So you can have a threshold: if somebody is listening with some kind of polarizer, trying to find out whether it's one or zero, you can say, look, it seems that this line of communication is not secure. Maybe we've got an error in the detection system for the photon, or something is wrong, but we can't use this to talk about the sensitive things we're going to discuss. We have to start over. We scrap the whole thing and start over, and then go from there. Once you have the encryption key, then you just use any old encryption scheme. And the key point is that since nobody knew what the key was except Bob and Alice (and by the clever way they did it, nobody can find out), they can use a new key every single time they talk. This is not like the PIN on an ATM card or a magnetic swipe that everybody can steal. For every single financial transaction you do, you walk up, establish a new key, and then encrypt all the financial data with that key that only you and the other party, the bank, know about, and nobody else can know about it. And every transaction has its own key, and so even if somebody tries to steal the data, it's not like they can get in with a single password and then look at all your stuff, because it's just completely hopeless. And this is of course the power of computers: to do stuff like this to keep things tricky for somebody trying to steal your stuff. Of course, the other side is that computers can be used to steal stuff pretty effectively. Anyway, this has been used. It's a secure method, and it is being put into practice, and maybe one day, every time we walk up to an ATM, the card will have a little laser, we'll establish this key, and then we'll take some money out and it'll be recorded. Okay.
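The bookkeeping just described is short enough to sketch in code. This is a toy simulation only (real BB84 sends single polarized photons, and all the names here are mine):

```python
# Toy BB84 sifting: random bits and bases, blind measurement, then sifting.
import random

n = 20
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice('+x') for _ in range(n)]   # '+' or times ('x')
bob_bases   = [random.choice('+x') for _ in range(n)]   # Bob guesses blindly

bob_bits = []
for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
    # Same basis: Bob reads Alice's bit with certainty.
    # Different basis: quantum mechanics gives 50/50 at random.
    bob_bits.append(bit if ab == bb else random.randint(0, 1))

# Sifting: Bob reveals only his basis choices; Alice says which trials to keep.
key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print("shared secret key:", key)
```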
That's all I want to say about quantum cryptography, but it is an interesting subject, simply because it really illustrates the principles of quantum mechanics and what you can do if you're clever. It took from 1926 until 1984 for somebody to figure out that you could do this, but now it's a very potent way to ensure privacy. How does a wave function evolve in time? That's the question. And the first point you have to make is that there is no operator for time, which seems kind of funny. At least it did to me when I was a student, because after all, we can measure time, or we think we can, or we can measure differences in time, elapsed time. It seems like something we ought to be able to measure, but there is no operator for time. It's not, for example, like position or momentum. There is no operator for time and there is no expectation value for time, and therefore time in quantum mechanics, as we're going to treat it anyway, is just a number. It's just a running variable, like x is for position in classical mechanics. It doesn't get elevated to a higher level. And you might say, well, why is that? And the short answer is that if time did have an operator, it wouldn't be Hermitian. And if you introduce an operator that's not Hermitian, you have a lot of problems. Well, motion: the easiest way to tell that time is going by is if something is moving. If we see a car rolling by, we know time has elapsed. If we see somebody driving off a diving board into a pool, we know that some time has elapsed. And motion is related to energy. And energy multiplied by time has the same units as h bar, which has the units of action, or joules times seconds. And that dimensional analysis gives us a clue, because when we had momentum and position, we took momentum times position and it had the same units as h bar. And now we've got time and energy, and we take time times energy and we get the same units as h bar. And so that gives us a clue as to what kind of wave function we might want to try to put in to start doing time-dependent phenomena. Well, the 1D kinetic energy of a particle is one half m v squared, or p squared over 2m, because remember, for a classical particle that's not relativistic, p, the momentum, is just mv, so p squared over 2m is the same as one half m v squared. And in elementary courses we always use KE for kinetic energy, but in more advanced courses we just use T. That's our notation. And the potential energy, which in elementary courses we call PE, we write as V. And the potential energy of the particle only depends on its position, and the kinetic energy only depends on its momentum. So the two of them are totally different forms of energy. Of course, they can be interconverted, and we do that all the time. Potential energy might be like a mass held at some height, and then if we drop it, if energy is conserved, we can figure out the velocity of the particle when it hits the ground, when all the energy has been converted into kinetic energy. The total energy, then, is E equals T plus V. And to cast this in terms of operators, all we have to do is take our variables and dress them up with hats. And therefore we have E, the total energy, will be p hat squared over 2m (m is again just a number, not an operator; just the p is) plus V of x hat. And V of x could be any functional form, including just zero. But instead of just x, we put x hat. And then we look at this thing and we say: energy is conserved over time if we have an isolated system. And therefore this thing is going to stay the same.
It can interconvert between one form and another, but it can't disappear. We can't get energy from nowhere, and we can't have energy go into nowhere. If that could ever happen, we'd know about it instantly, because we would have something that just sat there and ran and boiled water endlessly and didn't need to be plugged into the wall. And that would be very handy, but unfortunately it's very impossible as well. If we put in the explicit forms of these operators: the operator x hat, when it operates on the wave function, just returns the value x. And therefore we can operate with x hat on the wave function and we just get V of x, where x is now a variable. It's been turned into a variable; there's no operator left. P, on the other hand, was minus i h bar times the derivative with respect to x. And I've got p squared, and p squared is p times p, so I put in two of them. And therefore p squared over 2m becomes minus i h bar d by dx, times minus i h bar d by dx, times 1 over 2m, times psi of x. And that whole thing should then equal E, the energy, times psi of x. And that should be an equation that says that energy is conserved. And we now have to find the wave function that makes the energy conserved. And that will depend on the potential. Now, we can tidy this up, remembering that i squared is minus 1, and get: minus h bar squared over 2m times the second derivative with respect to x of the wave function, plus the potential energy V of x times the wave function, is equal to E, some number with units of energy, times the wave function. And this equation is called the one-dimensional time-independent Schrodinger equation. This is an equation that says energy is conserved. If you find the wave function that makes this thing true, you will have found the allowed energy. But there's no time in this equation yet. Now, depending on the functional form of V of x (for example, for a molecular spring we might have V of x is one half k x squared, so that the energy goes up quadratically; that's a harmonic oscillator), depending on what the form of this potential energy is (electrostatic energy, various kinds of repulsive forces, and so on), we can put all those in. And if the particle is confined, if the potential gets big and the particle is stuck, like water in a cup that has to stay there, or an electron on an atom, what we find when we solve this equation is that we can't have any old energy. And the reason why we can't have any old energy is that the wave function has to fit. We saw that with the particle on a ring, and it's going to be the same no matter how the particle is trapped. The wave function has to fit into the allowed space, and that means it can only have a certain kind of wavelength, not just any old thing. And that means the energies are discrete. The energies get labeled with a quantum number, so we have E sub n: we label them with the quantum number n. We don't have any time. The question is, how do we introduce time? And it's very tricky to think how to introduce time, because we don't have any guidance necessarily from classical mechanics about how to do it. And in the case of an isolated atom, let's say, just sitting there in its lowest energy state, it appears to just sit there, quote unquote, and the electron distribution, the probability distribution of the wave function, remains constant. It doesn't fluctuate.
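Putting the operator substitution together in symbols, the conservation statement becomes the time-independent Schrodinger equation:

```latex
\left[\frac{\hat p^{\,2}}{2m} + V(\hat x)\right]\psi(x)
\;=\; -\frac{\hbar^2}{2m}\,\frac{d^2\psi(x)}{dx^2} + V(x)\,\psi(x)
\;=\; E\,\psi(x)
```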
The fact that it doesn't fluctuate means that psi star psi remains constant at all times, and that's a big constraint on what the wave function can do over time. Because that means, if we use now the capital wave function (the dressed-up one with an explicit function of time in it), that psi star psi at some time t, at all values of x, has to be the same as psi star psi at the time we call zero, where we start looking, where we start the experiment. And that means that whatever happens in time to some isolated state like that, it can't be too violent. Because if it were some strange thing that affected the wave function a lot, moved it around a lot, made extra lumps and stuff, what would happen is we'd notice it. We'd see something changing, because the probability of finding the particle and so forth would be different. And when we do the measurement, of course, we've destroyed whatever probability distribution was there, but we can do the measurement over and over and over, and we can find: look, the chance of finding the electron is like a sphere, and it doesn't change. If I wait a minute, it's the same. And therefore, the most we can do to this wave function psi is multiply it by a phase factor e to the i theta. Why e to the i theta? Well, the length of e to the i theta is 1. In the complex plane it's just an arrow with length 1. If theta is 0, it is 1. If theta is 90 degrees, it's i, which still has the same length of 1; then minus 1, minus i, and so on. And when I take psi star, instead of e to the i theta I get e to the minus i theta. And e to the anything times e to the minus anything is e to the 0, and that's 1. And that means that the probability distribution stays put. So now I have a clue. When I have a state with constant energy like that, that's just sitting there, it must be that all I'm doing in time is multiplying by this thing that keeps the same length. And so what I can imagine is that I have a distribution, and the distribution could be moving in time somehow. But rather than thinking of it moving like Jell-O, moving around, what I should do is just color it. It starts out white and then it turns gray and then it turns black, or maybe it starts out red and then goes through the rainbow over time. But its shape doesn't change. And the only thing we can measure is not the color, but just the shape, when we make the measurement. Now, how could we get a phase factor e to the i theta? Well, we know that anything in an exponential can't have any units, and we think it should depend on time. So, by analogy with the p times x we had before, we could put in the exponential e to the minus i times something (let's call it epsilon) times time. And epsilon must have the units of inverse time, which is not energy. But even though the probability distribution is stationary, we'd expect the phase to depend on energy. In other words, although this thing is staying put, if the energy is high the colors are really flashing like crazy, and if the energy is low, the colors are hardly moving at all: very slow throbbing. And taking a cue from the presence of h bar in the momentum eigenfunctions, we could guess the following: that capital psi at time t is capital psi at time zero times e to the minus i E t over h bar. And that would be a very good guess, as it turns out, because if we put this guess into the Schrodinger equation, we can make a connection with a time derivative. So now we take E psi, which was the time-independent part, which was just the energy.
And we say, well, if the wave function is time dependent, we'll write that as E times psi at zero times e to the minus i E t over h bar. But if that's true, then the other way of writing that, to get rid of the E, is to write i h bar (just like we did with momentum, only there it was minus i h bar) times the time derivative of psi. Because when I take the time derivative, out comes minus i E over h bar (without the t, of course, just the constant), and so the i h bar and the minus i over h bar cancel, and I get E, the energy. And that's exactly what I want to have. So using this on the right-hand side of the equation, we now have a proper time-dependent equation, which is very much like f equals ma, where the acceleration is the second derivative of position with respect to time. And what we have now is the kinetic energy of the wave function, minus h bar squared over 2m times the second derivative of psi, plus the potential energy, which is just the potential energy times psi. And that should equal E psi; but if it's time dependent, then it becomes i h bar times the time derivative of psi. And this is in fact the one-dimensional time-dependent Schrodinger equation, which is the basis for all kinds of time-dependent calculations that people carry out. The wave function in this case depends on both x and time, and so to be mathematically rigorous, we have to use the funny d's. Writing an ordinary d is not proper, because the wave function depends on both variables, and then I'd have to start worrying about things like dx dt and so forth, and that's not what I mean here. We're taking the derivative with respect to space only on one side, and on the other side we're taking the derivative with respect to time only, and therefore we have to use the funny d's. And that means this becomes a partial differential equation, which can be very, very, very difficult to solve. These guys are bears, and you take a course in PDEs and you learn how to solve them. When there are three spatial dimensions rather than just one, the kinetic energy adds up separately: p x squared, p y squared, p z squared, no big deal. And the potential then becomes a function of x, y, and z, which we usually write as just r, some vector which tells you where you are. And then we get a fearsome-looking equation, because we get minus h bar squared over 2m times this upside-down triangle, the del squared operator, on the wave function, plus the potential, and then we have i h bar times the time derivative of psi on the other side, the same as before, because there's still only one time dimension. The del operator, this triangle thing, is just a shorthand, because we get so sick of writing d by dx squared, d by dy squared; you get carpal tunnel and you give up. So we just write this triangle with three sides; it's the three derivatives. And the second derivative, del squared, is just del dot del. So del, with an arrow over it or in boldface, is a vector. It takes the derivative with respect to a certain direction. So, for example, if you're on a mountain, it might be that if you walk this way the slope is zero (it's the path, or it's very slight), and if you go this way the slope is very steep and you fall off a cliff. Likewise, the derivative, when you have a multi-dimensional function, can depend on which way you're looking, like x or y. When you square an operator, all you do is operate with it again, just as if you multiply by 2 twice, you've operated by 2 squared, or 4. Now suppose we have a free particle.
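In symbols, the stationary-state guess and the resulting equations read:

```latex
\Psi(x,t) = \psi(x)\,e^{-iEt/\hbar},\qquad
i\hbar\,\frac{\partial\Psi}{\partial t}
  = -\frac{\hbar^2}{2m}\,\frac{\partial^2\Psi}{\partial x^2} + V(x)\,\Psi
\quad\text{(1D)},\qquad
i\hbar\,\frac{\partial\Psi}{\partial t}
  = -\frac{\hbar^2}{2m}\,\nabla^2\Psi + V(\mathbf r)\,\Psi
\quad\text{(3D)}
```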
A free particle means there's no potential energy; it's all kinetic. For example, a neutron in a nuclear reactor, which is uncharged and has no electric forces (it's a neutral particle), may be considered, until it hits something like a moderator, to be a free particle. It's very tiny. It's going through mostly vacuum; remember, atoms are mostly empty space. If you don't have any electric charges, you don't notice anything, and you go right through, which is one reason why it's kind of tough to shield neutrons in some cases. In this case, I've put it in explicitly here: the kinetic energy term is the same, minus h bar squared over 2m times the second derivative; the potential energy is zero, so I put a big zero; and that's equal to E psi. And given E, the kinetic energy of the neutron, we want to find the wave function psi of the neutron. This is a second-order differential equation, and it may not be easy to solve if you haven't seen them before. But we know what the kinetic energy of the neutron should be: it should be p squared over 2m, as long as it's not relativistic. And we know that the derivative of an exponential function, any number of times, gives back an exponential function. And we know that when we operate twice with the derivative, we get the wave function back times a number. So we try a guess, a solution like psi of x is e to the i p x over h bar. And if we substitute our guess into the differential equation, what we find is that it appears to work. So I've just worked it out here for you. The first derivative of that trial wave function gives a factor of i p over h bar times the same wave function back. The second derivative gives it twice, and so we get minus p squared over h bar squared. And of course the kinetic energy operator has minus h bar squared over 2m in front, so the minus signs cancel, and p squared over h bar squared becomes p squared over 2m. So that happens to work. And so we find the solution: the energy is p squared over 2m, and the wave function is this e to the i p x over h bar. And of course, we only expect the particle to have kinetic energy, and that's what we found. The total energy is p squared over 2m. What we didn't find is that there could have been a minus i in the exponent. I picked e to the plus i p x over h bar, but it turns out that I could pick e to the minus i p x over h bar. Well, that's just the thing moving in the other direction. The weird thing, though, is that the general solution is some part, a, let's say, times e to the i p x over h bar, plus another part, b, times e to the minus i p x over h bar. And if I put those in, it works. One part's a particle moving to the right with momentum p; the other's moving to the left with linear momentum p. And it's perfectly acceptable for the wave function to be moving both ways at once. And you might say, well, surely the particle can't possibly be moving both ways at once; what does that mean? And the answer is: it means we're doing quantum mechanics, because the particle actually can. Until you measure it, it's up to itself what it wants to decide to do. And so, the same way that the electron can be slipping through both slits, the particle can be moving both ways. And in fact, that's one of the interesting things about these kinds of systems. In the next lecture, then, what I want to talk about is some one-dimensional model problems with certain well-defined potentials.
In the next lecture, then, what I want to talk about is some one-dimensional model problems with certain well-defined potentials, and I'll use these model problems to show you, with these equations that we've built up, exactly why it is that atoms and molecules and various other systems, such as quantum dots, have quantized energy levels with discrete energies. So we'll leave it there and pick it up in lecture five.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:20 Localized Wavefunctions 0:11:30 Fourier Series 0:13:21 Quantum Cryptography 0:28:32 Time Evolution 0:47:22 A Free Particle
10.5446/18881 (DOI)
Welcome back to lecture 3, Chem 131A. We're going to continue where we left off. Today it's more postulates, superposition, operators, and measurement. Where we last left our hero, we had decided that the derivative operator is linear, but it was not Hermitian. And then I introduced this very ornate relationship to describe what I meant by Hermitian. And you might wonder what it means. But what it means is basically this: suppose we had complex numbers, and most numbers were complex, and then we wanted to say that a number was real, but we only had complex numbers. Well, one trick we could use is we could say, if z is equal to z star, then the number is real. Because the only imaginary part that can be equal to the opposite of itself is zero. And zero imaginary part means the number is real. And so really, Hermitian is just making sure that when we measure something, we get a real number. We still do believe that probability doesn't have an imaginary part, and neither does energy; when we measure it, it has units of joules and so forth. And so we want to make sure that these things that we measure are Hermitian. And this formula with these integrals and stars and the operator in between is just a very fancy way of saying z is equal to z star. Nothing more than that. Okay, let's show that the derivative operator, then, is not Hermitian. Well, here's what we have to do. We have to do an integral of f star d by dx g, and we have to show that that is or is not equal to the integral of g star d by dx f, whole thing star. When you see an integrand that has a derivative in it, the first thing you think is, I bet I can integrate that by parts. If you recall, integrating by parts is basically running the product rule backwards: the derivative of uv is u dv plus v du; we turn that around, move one of the terms to the other side, and set the integral equal to that. Now the limits on this integration are plus and minus infinity, but I won't always put them in because it may get a little bit messy. But whatever happens, the wave functions have to vanish at plus or minus infinity. And the argument as to why they have to vanish is that if they had any amplitude out there, way out there, then we couldn't normalize them. They would get too big. And so the only way we can have the area under the curve crank down to some number is that it finally dies out when we get far enough away. So let's try integration by parts. The formula is: the integral of u d by dx of v of x is equal to uv minus the integral the other way around, v d by dx u. And this is going to be convenient, because the Hermitian condition had them the other way around. And now let's let u, the function conventionally called u of x in calculus, be f star, and let's let v be g. And let's try it. So our equation becomes this, fairly intimidating looking but not too bad: the integral of f star d by dx g is equal to f star g evaluated at plus and minus infinity, which vanishes because the wave functions die off at infinity, minus the integral of g d by dx f star. And that is equal to minus the integral of g star d by dx f, whole thing star. But that's not equal to what we want, because we have a minus sign and we want it to be a plus sign. So it's not equal, and therefore the derivative operator is not Hermitian. It's called anti-Hermitian, for obvious reasons: when you swap them, it changes sign. But it's not Hermitian.
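In symbols, the integration-by-parts argument is (the boundary term vanishes because the wave functions die off at infinity):

\int_{-\infty}^{\infty} f^{*}\,\frac{dg}{dx}\,dx = \Big[f^{*}g\Big]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} g\,\frac{df^{*}}{dx}\,dx = -\left(\int_{-\infty}^{\infty} g^{*}\,\frac{df}{dx}\,dx\right)^{*}

which is minus what the Hermitian condition requires.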
How can we make it Hermitian? Well, interestingly enough, we have to use our friend the square root of negative one again. And if we multiply the derivative operator by minus ih bar, that's enough to do the job, because minus i star is plus i, and so that gets rid of the minus sign that we got stuck with with the regular derivative. And then we just follow everything else through, and it works. You say, well, why is the h bar there? And the answer is, this is quantum mechanics. Of course there's an h bar there, because we're going to have to have that in almost everything we use. And in fact the momentum operator p hat x, which, when it operates on a wave function, tells us the momentum in the x direction, is just given by minus ih bar d by dx, and it is a linear Hermitian operator. Its eigenfunctions are very closely related to those of the derivative operator because, after all, all it has is just an extra thing out in front. But we want to make sure that the eigenvalues are real, and so these eigenfunctions are exponentials, but we put in an i, and here we realize that p is real, x is real, h bar is real, and so this is e to the ipx upon h bar. We know that we have to have the units go away if we take an exponential, because an exponential is a power series, 1 plus x plus x squared over 2, and if the argument has units, we're adding feet and feet squared and feet cubed, and that doesn't make any sense. And so with a little bit of dimensional analysis, we come to the idea that these functions here, e to the ipx upon h bar, are very good candidates. So let's do another practice problem and have a look. Let's show that these are in fact the eigenfunctions of the momentum operator. Well, let's take p hat x on our function phi of x. Let's put in what p hat is: minus ih bar d by dx on the function. Let's put in the function, which we assume is e to the plus ipx upon h bar, and the derivative of e to the ax is a times e to the ax, so we bring down the ip upon h bar, and now I think you can see why we want h bar out in front. The h bars fold up; minus i times plus i is minus i squared, but minus i squared is plus 1. That goes away, and that leaves us with p, and that's p e to the ipx upon h bar, and that's p times the eigenfunction. And therefore we've shown that the operator p hat returns the eigenvalue p, which has the units of momentum. So the complex exponential is the eigenfunction of the momentum operator, and the eigenvalue is p. In the language of linear algebra, the eigenfunctions of a linear Hermitian operator form a basis. So if I take a point in a two-dimensional plane and I want to figure out where I am, I know that if I go a certain unit out along the x axis and then up or down by y, I can get to the point; and furthermore, any point anywhere can be expressed as a combination of some distance this way and some distance up or down, and there's no point that can escape. So any vector, any point x naught, y naught, is equal to x naught times the coordinate unit along the x axis plus y naught times the coordinate unit along the y axis, and just like that, we can write any wave function as a linear combination of eigenfunctions of a Hermitian operator. They form a basis. No function can escape, and that's important, because if some functions could escape, that would mean there were certain states in which we couldn't measure anything, and that would be very bad, because what would happen to the probability? Particles would be disappearing then.
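To recap that practice problem in symbols:

\hat{p}_{x}\,\phi(x) = -i\hbar\,\frac{d}{dx}\,e^{ipx/\hbar} = -i\hbar\left(\frac{ip}{\hbar}\right)e^{ipx/\hbar} = p\,e^{ipx/\hbar} = p\,\phi(x)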
Postulate 5 is this. It's quite a mouthful, but we'll get to it. When a wave function is not an eigenfunction of the measured observable, the result of the measurement is still an eigenvalue, but now the probability is given by the square modulus of the expansion coefficient of the eigenfunctions of the operator. So suppose I have a wave function psi that is some constant (it could be a complex number, it doesn't matter, because all these functions can be complex) times phi 1, plus another constant times phi 2: let's just call it c1 phi 1 plus c2 phi 2. Then the probability of obtaining the first eigenvalue is the square of c1, with the absolute value, so if there's an imaginary part you take c1 star c1, and the probability of obtaining the second one is the square of c2. Those are the two probabilities, and if there are only two parts making up the wave function, those are the only two values you can get. Usually a wave function is made up of a whole bunch of different eigenfunctions, and so there are a lot of different possibilities that you can get. The eigenfunctions themselves have to be normalized. That means if you happen to be in an eigenfunction, your chance of being somewhere in the universe and having that eigenvalue, let's say, of momentum is equal to 1. And so the basis functions themselves are normalized, and we always assume that they are normalized without comment. And likewise, for the wave function to be normalized once the basis functions are normalized, the probabilities in those coefficients have to add up. So the sum of the squares of all the coefficients always has to add up to 1. It's as if we have a unit circle and we're some point on the circle, and we have an x component and a y component, and the Pythagorean theorem says x squared plus y squared is equal to 1; that's how it works, and it works the same way in higher dimensions. Second comment: the best way to think of these eigenfunctions is not as things spraying around in space. Think of them as vectors. Think of one eigenfunction pointing this way, telling you the amount on this side. The other one points this way. A third one points up. If I've got more, I have to use my imagination. But basically they're all at right angles to each other, and they're all telling the amount of that special state that is in there to begin with. And eigenfunctions of a linear Hermitian operator corresponding to different eigenvalues are orthogonal, and that's another reason why it's good to think of them like vectors, because if I have an eigenfunction here with one eigenvalue, and I have another eigenfunction here with a different eigenvalue, then those two functions have nothing to do with each other. They are as different as different can be. They're in different directions. They have no influence on each other. And to see this: normally, suppose we have x and y; we can tell they're at right angles because we can look. But suppose I put my arms out some way and then I ask, well, are those orthogonal? You could try to mentally rotate and see if it comes back to x and y, but that's a very, very slow and labor intensive way to do it. Instead, what you do is you take the dot product. You take the product of the two x components, the two y components, the two z components, you add them all up, and you see if that's zero. If it's zero, that means they're orthogonal. If it's not zero, that means that they aren't orthogonal. So for three real components, let's say two vectors in 3D space, I just take ax bx plus ay by plus az bz. And if that sum comes to zero (it doesn't matter what the individual terms are), that means that the vector a and the vector b are at right angles to each other. And that's much, much easier to compute.
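As a small numerical sketch, my own illustration with made-up numbers rather than something worked in the lecture, here are both ideas at once: the probabilities from the squared expansion coefficients, and the dot-product test for orthogonality.

import numpy as np

# Postulate 5: for psi = c1*phi1 + c2*phi2, the probability of measuring
# eigenvalue k is |c_k|^2. These coefficient values are invented.
c = np.array([0.6 + 0.0j, 0.0 + 0.8j])
print(np.abs(c) ** 2)           # [0.36 0.64], the two probabilities
print((np.abs(c) ** 2).sum())   # 1.0, since the squares must add up to 1

# Orthogonality test: two vectors are at right angles when the sum
# a_x*b_x + a_y*b_y + a_z*b_z comes to zero. These vectors are made up.
a = np.array([1.0, 2.0, -1.0])
b = np.array([3.0, 0.0, 3.0])
print(np.dot(a, b))             # 0.0, so a and b are orthogonal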
More generally, if we've got lots of dimensions, then we need to expand our sum. We don't want to use x, y, and z if we've got, let's say, 5 or 6 or 20 components; we run out of letters, so we switch to numbers, where we won't run out: a1 b1 plus a2 b2 plus a3 b3, and we just write that in shorthand as the sum over n of an bn. And that goes to however far we want it to go, including in some cases to infinity. And that should be zero. And the same idea holds for functions, except, number one, the sum becomes an integral, because when you multiply the functions by each other, they both depend on x, and so you can't just add up; you have to integrate to get the answer. And, number two, because the functions can be complex, we have to take the complex conjugate of the first function. Let's suppose we've got two functions f and g. Then our orthogonality condition is as follows: the integral of f star times g dx is zero. Now we can show, based on this and the definition of Hermitian, that eigenfunctions with different eigenvalues are orthogonal. So here's what we do. If it's an eigenfunction, it has an eigenvalue that's a real number. So let's put omega hat on phi one, and we get omega one times phi one back, because it's an eigenfunction. We put omega hat on phi two, and we get omega two times phi two. And the only thing we need to know is that omega one is not equal to omega two. They're different numbers. They're real and they're unequal. And the operator, big omega hat, is Hermitian. Let's take the first eigenvalue equation and make a series of operations on both sides of it. That's always what you do when you simplify equations: you do the same thing to both sides methodically. And if you do that, you never get mixed up and nothing ever goes wrong. And if you do some shorthand of cross multiplying this and that and you don't know what you're doing exactly, you'll oftentimes get it wrong. So let's take this equation, omega hat phi one equals omega one phi one, and let's first multiply on the left (we have to make sure we multiply on the same side when we do this) by phi two star. Okay? So now we've got phi two star omega hat phi one is equal to phi two star little omega one phi one. And then, since omega one's a constant, I can pull it out and say that's little omega one phi two star phi one. Now I'm going to put an integral on both sides, because if two things are equal, then if I multiply them both by phi two star, they're still equal, and if I integrate them both over dx, they're still equal. They don't become unequal. And so I integrate phi two star omega hat phi one dx, and that's equal to the integral of omega one phi two star phi one dx. And since omega one is a constant, I pull it out, and I end up with omega one times the integral of phi two star phi one dx. And we can do the same series of operations exactly, but instead of having omega hat phi one, we take omega hat phi two, which gives omega two; we just go through the same steps, only we swap the roles of one and two, and multiply by phi one star. And if we do that (I've just not done every step here), the integral of phi one star omega hat phi two is equal to omega two times the integral of phi one star phi two dx. Now let's take the complex conjugate of both sides of the first equation. So on the left hand side, I have the complex conjugate of the whole thing: phi two star, omega hat, phi one, integrated.
And on the other side, I have little omega one times the integral of phi two star phi one dx, whole thing star. And I can simplify that. I leave the left hand side alone, because that's going to be the definition of Hermitian. The right hand side I turn into omega one star (and omega one star is just omega one, because the eigenvalues of a Hermitian operator are real) times the integral of phi two star star times phi one star. Well, the star of the star, let's see, I change i to minus i and back to i, so that goes away. And I can then write the phi one in front of the phi two; it doesn't matter, I'm multiplying those, there's no operator. So I finally come to the following: the complex conjugate of the integral of phi two star omega hat phi one is equal to omega one times the integral of phi one star phi two. What does that get us? Well, the observable is Hermitian, and the definition of Hermitian says that this complex conjugate is exactly the integral of phi one star omega hat phi two. But that integral is the one we worked out a moment ago: it's omega two times the integral of phi one star phi two. And so, using our two series of equations, here's what we come to finally: omega two times the integral of phi one star phi two is equal to omega one times the integral of phi one star phi two. But omega two is not equal to omega one. So let's subtract omega one times the integral from both sides. Then we find that omega two minus omega one, times the integral, is equal to zero. But since omega two is not equal to omega one, it must be that the other thing is zero, because if I have any number times something, the only way I can make the whole thing zero is if the other thing is zero. And that means that the integral of phi one star phi two dx is zero. And that means that they are orthogonal. So that's the proof. You'll have to go over it a couple of times to get it down. But that's kind of a standard thing that's done in quantum mechanics, to show that eigenfunctions for different eigenvalues are in fact orthogonal.
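The whole proof, condensed into a few lines of LaTeX:

\int \phi_{2}^{*}\,\hat{\Omega}\,\phi_{1}\,dx = \omega_{1}\int \phi_{2}^{*}\phi_{1}\,dx \;\Rightarrow\; \left(\int \phi_{2}^{*}\,\hat{\Omega}\,\phi_{1}\,dx\right)^{*} = \omega_{1}\int \phi_{1}^{*}\phi_{2}\,dx

\text{Hermiticity: } \left(\int \phi_{2}^{*}\,\hat{\Omega}\,\phi_{1}\,dx\right)^{*} = \int \phi_{1}^{*}\,\hat{\Omega}\,\phi_{2}\,dx = \omega_{2}\int \phi_{1}^{*}\phi_{2}\,dx

(\omega_{2} - \omega_{1})\int \phi_{1}^{*}\phi_{2}\,dx = 0, \quad \omega_{2} \neq \omega_{1} \;\Rightarrow\; \int \phi_{1}^{*}\phi_{2}\,dx = 0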
Now suppose we make a measurement on a quantum system, and it's represented by a wave function psi that's not an eigenstate of the operator in question. Then what happens? Well, we express psi as a linear combination of eigenfunctions, and we know we can do that, because there is no function that can escape us. And we know our eigenfunctions can be made normalized, so we assume they're normalized. And then the probability of obtaining a particular eigenvalue, let's say eigenvalue k out of the totality from one to n, is the absolute value of ck squared, where ck is the coefficient of the kth eigenfunction. Now suppose we then make the measurement again right away. The question is, do we get a different result? And the answer is kind of surprising, but the answer is no. It turns out if we make the measurement again, we get the same result. And if we keep measuring the same observable over and over, we keep getting the same result. And now, sort of mysteriously, in a way, it's 100% certain that we're going to get that result. There is no other result that we're going to get. And I've tried to encapsulate this in this kind of pseudo equation. We start out with probabilities. It could be any of these eigenstates. And then we make a measurement, and somehow one of them is chosen. And we can't say how, even in an ideal experiment, but we can say what the probability is. Let's say 25% of the time we get this result. Now if we measure again and nothing's intervened, we haven't done anything, we get the same result again. And again, and again, and again. And now there's no probability at all. It's always 100%. So it's exact certainty. That is, measurement is kind of like a filter. All the other possibilities are filtered out, leaving the one that's actually observed. If you sort coins with a coin sorter, you roll them down, and when they fit the size of the tube, they drop in. And if you don't know which tube a given coin is going to drop into, then that's like being uncertain. But then if you drop the thing in and it drops into the third tube, and then you empty it out of there and put it back in, it's going to drop into the third tube again. And that's kind of an analogy for what's going on here. So instead of the probability of a coin dropping into the third tube, we could say, for example, suppose we flip a coin until it lands, stops spinning, and lies down; we assume it's 50% probability heads and 50% probability tails. Then we see it's heads. Then, if we don't do anything, we don't flip it again, it just sits there: it's heads. It's heads again. It's heads again, and so forth. And it's heads as many times as we want to keep looking at it. And that's what this theory of measurement is saying. In other words, when you make a measurement, you rule out certain possibilities. They're now gone. Now the measurement's made; it happened to come up this way. If you make it again, it comes up this way. If you make it again, it comes up this way. And that's assuming you don't have any interaction in between. But this is an idealized experiment. We aren't talking about how we would practically implement it. And likewise, in quantum mechanics, it's just like looking at that coin. If we make the same ideal measurement again and again, after filtering out all the other possibilities, we just get the one result that we got, the same result each time. But before we made the measurement, it seemed like there were other possibilities. And if we start all over, not with the one we've measured, but with an identical particle coming through that we haven't measured, then we might get a different answer. And then if we measure that again, we'll get that different answer again, and so on and so forth. And so it seems as if the measurement itself took this very fragile thing, this wave function, and made it collapse onto a particular eigenfunction. It said, right, this is it. And then all the other possibilities vanished forever. If I decide I'm going to give a lecture, I turn up and do the lecture, but if I decide I'm going to the beach instead and I go to the beach, then the lecture is not a possibility, and it's now vanished forever. It's gone and I'm at the beach. And so by making that choice, I've narrowed down the possibilities. Before I did that, I could say, well, 50-50, I give the lecture or go to the beach. And that's important, because it means that measurement affects quantum systems, and that means that there is no such thing as a property without measuring it. We usually think that things have properties independent of measuring them, because they seem to. This pointer, for example, has a mass, whether I have it on a scale or not, and I assume it's the same. And for big objects that are always being bombarded by all kinds of things and never have a chance to let the wave function sneak around, that's certainly true. But for small things, we have to be very wary about assuming that something has a property if we have not measured it, because the measurement will change it. And so it could be that it was in some superposition or mixture, and when we measured, we picked out one of them.
But that doesn't mean it was like that before. It means we might have changed it. So if we had obtained, let's say, go back to the coin, if we obtained tails instead of heads on the first throw, then if we keep looking at it, it's tails. And so we get 50% probability, and it collapses onto the other side. It collapses onto a particular choice, half the time, and once it has collapsed onto that particular choice, it remains there for any number of repeated measurements. It does not change. Now, the question is this. What happened to the uncertainty principle? Because now I'm claiming that we can get measured results with certainty. We're saying we always get the same result. We just measure it once, then the uncertainty goes away. And that's kind of interesting because it's not so simple. Because the uncertainty principle, which we quoted for position and momentum, applies to measuring two things, position and momentum at once, or one right after the other, not just one observable. There is no uncertainty about measuring one thing as well as you like. The problem is if you want to measure what you think of as everything that you could measure, then there will be some problems, some blurring perhaps that you didn't anticipate. So a deeper analysis shows us that not all properties need to be uncertain. In fact, if the two operators have the same set of eigenfunctions, this is why it's very important mathematically for us to be able to determine the eigenfunctions of an operator. Because we might have this operator and that operator representing this and that. And if it turns out that the two operators mathematically have the same set of eigenfunctions, even if they have different eigenvalues, usually they will, because they have different units and so on. Then we can measure both of them and we get exact results for both. So we may measure this and that and we get a certain value. And if we measure this and that again, we get the same for both. And there's no uncertainty. But unfortunately, position and momentum, which are two things that people like to determine, are not compatible in that way. Oh, there's this idea called complementarity. Observables that are incompatible cannot be measured to arbitrary precision. So here, what I've shown is a real coin. I flipped it and it happened to come up heads. It's a penny. And you can see that it's heads because you can see Lincoln in profile on the face of the coin. And you can even read other things on it, like the year it's minted. But let's just say we can tell that it's heads for sure. Now, suppose instead of trying to measure heads, and if I leave it there, it's going to obviously measure heads, heads, heads. I'm not going to flip because I'm not allowed to do anything to it, except look at it, measure it. If on the other hand, we're interested in the exact thickness of the coin, in that case, we have to orient it like this. And this was tricky to do, but the coin did balance on its edge. It was thick enough and the surface was flat enough. And my collaborator had a steady enough hand. And now, if you have the coin oriented like this, you can see exactly how thick it is. Whereas when it was down with the head pointing, you had no idea how thick it was. Imagine you're looking straight down on it so you can get the best possible view. Now, you can try, now if the coin's on edge, it's obviously unstable. And so anything I try to do to look at it could have it drop. But the question is, when it's like that, which side is heads? 
And the answer is, because I'm looking at it edge on, I have no idea which side is heads. And quantum systems are very much like that if we have complementary variables. If I try to zero in on one of them, it means the other one fades out. And I can't get both at once, because they're interfering with each other; there's just no possibility of doing that. We could try to cheat. Here's the coin balanced on a pen, with an eraser to keep it steady, and we could look at the coin on an angle like this. And the way it's angled here, I can pretty much tell it's heads. It's not as clear as it was before, but I can pretty much tell it's heads. But what I can't do now is measure the thickness very well, because I'm seeing the thickness from an angle, and it gets smaller and smaller and smaller as I tilt toward the heads-on view, and then I can't measure it very well. And basically, in order to measure the thickness better, I have to turn the coin toward me like that, and then finally, at some point, I can't see whether it's heads or tails. Now with a coin, a macroscopic thing, I can look at it heads-on, I can orient it and say, well, that side's heads. But with small things, you can forget that. That's not possible. Unless you can see it's heads, you don't know what it is. And that's the problem. So the uncertainty principle really makes this numerically rigorous. It says exactly how well you could measure the thickness and/or tell it's heads when it's a small thing, once you know how the different variables, the things you're trying to measure, interact with each other. That's basically what it's making much more rigorous. Let's talk now about classical atoms. For a classical atom, we have Maxwell's equations. And this was another problem, actually, at the turn of the century: it was fairly easy to work out that an accelerating charge would radiate energy in accordance with Maxwell's equations. But the electron is certainly accelerating, if we imagine it going in a circle around a positively charged nucleus; so it has to radiate energy, and then it has to slow down, because the energy doesn't come from nowhere. And what that means is that the electron would spiral in toward the proton and would eventually condense onto it. And if that happened, there wouldn't be any electrons around to make bonds, and so there wouldn't be any molecules, there wouldn't be any life, there wouldn't be any atoms even. There would just be something like a neutron star, with all this condensed matter. Somehow, the electron is not in a planetary orbit, and it's not behaving according to the way a charge would in Maxwell's equations. And the reason it doesn't, partly, is the uncertainty principle. Because suppose the electron starts slowing down and spiraling in, coming in and in and in, smaller and smaller and smaller. Well, we just did a calculation that showed that if the electron's within 200 picometers, the minimum uncertainty in velocity is something like 300,000 meters per second. And what that means is that it is impossible for the electron to spiral in and sit on top of the proton, in that itsy bitsy space, and be stationary, because that violates the uncertainty principle. Quantum mechanics says it's not possible to measure position and momentum that accurately. And therefore, the electron may start spiraling in and then may just get tossed out suddenly and go in a different direction. And so that saves us. So if the electron is not spiraling around like a planetary model of an atom in an orbit, then what is it doing?
It certainly maintains a stable probability distribution, because we look at atoms and we see that they have a cloud of negative charge around them, and it doesn't change if we don't disturb the atom, if we just leave it alone. But we don't know where the electron is, because the electron behaves like a wave, unless we try to measure its position, for which we would have to use a very energetic photon and blow the electron clean out of the atom, basically. And then, of course, we've lost our thread: we were trying to figure out what it looked like when we didn't look at it. The problem is that's not allowed. You can only talk about the things you can measure. You can't talk about things that you can only imagine what they might be. Stable distributions of the electron density have to be standing waves. For a standing wave, you can think of a guitar string. If I put my finger on a fret here, the string can't move here, and it can't move at the other end; in between, it can vibrate, and it just makes a stable pattern and sits there doing the same thing. And the electron then has to do something very much like that when it's in an atom, and it has a very interesting wave property; we couldn't understand it at all if we thought of it as a rock or a BB moving around in there. An orbit, of course, is a periodic trajectory, but electrons don't have trajectories. And so instead of an orbit, we speak of orbitals, which is the wave analog of a stable orbit. Now it's the wave function that, like the guitar string that can only play a certain note if I hit that fret, can only play certain notes in the atom. It has to come around and match itself and give a stable standing wave. And that means the wavelength of the wave function has to match the space into which the electron is confined. If it doesn't, you won't find the electron in that wave function: there is destructive interference, and the wave function vanishes. And if the wave function vanishes, then the chance of seeing the electron at that energy also vanishes, because the wave function tells us the probability. A 3D thing is kind of hard to visualize, but we can certainly do it for an electron on a ring in 2D, and that makes it much easier for us to draw. So let's have a look, then, at an electron on a ring. Here is an electron going around as a wave. I say going around, but I don't know where it is, because I haven't measured its position. But I have a standing wave. Its magnitude is equal everywhere around the ring; I'm just showing the real part, and when the real part's zero, the imaginary part is big. And that's another reason why we have to have complex waves sometimes. This one goes round and round and round. And every time it comes around, it's back in the same place. Goes around, comes back in the same place. Round and round. And so, therefore, it's going to make a stable repeating pattern. It's going to sit there. And in fact, we can't see anything going round and round. I imagined it was doing that, as if it were a little rock going around and around. But in fact, all it is is just this pattern, a stable repeating pattern, because it exactly matches the condition that it meet itself when it hooks up again. But suppose I have a slightly different wavelength, so that it doesn't match when it comes around, but it's a little bit off: it goes around, and instead of matching perfectly, it's a little higher; then if it goes around again, it's a little worse; and if it goes around again, it's a little worse still. Waves go up and down.
Finally, it's coming around with the opposite sign. And let's go around another time on this thing. The second time around, it's a worse mismatch. And if I go around several times, it appears, when I draw the thing, that the wave is up and down and up and down and up and down everywhere. And that means that it cancels itself out. There is no wave there; if it's up and down and up and down everywhere, it's just canceled itself out. The only wavelength that can support a wave is the one that perfectly matches. Now there could be a higher one: instead of three lobes, it has four, and it goes around four times, so that perfectly matches too. But there's not three and a half, and there's not 3.1, and there's not pi lobes. There's exactly an integer number of lobes, and that means there is a discrete set of energy levels that the electron can be in; it can't just be anywhere. That's not allowed. And that's very important, because it explains the spectroscopic observation of atoms, where they didn't just radiate any old light, but every element gave certain characteristic lines, which depended on where it was in the periodic table and so forth and so on. And of course it's very important for chemical analysis: one way you can tell what's in an unknown sample is to do atomic emission spectroscopy. Confined systems: atoms are confined systems, because the electron has to stay there, but there are many other confined systems, like nanoparticles, or the so-called particle in a box, which is a model problem we're going to do, which has very easy mathematics compared to real problems, which is why we do it. Wherever you have a confined system, it doesn't matter how it's confined, the wave function has to somehow fit. It has to fit into the space available. If it goes round or bounces back and forth or does anything and it comes back different, that means that particular wave function is going to cancel itself out, and it's just gone. This matching condition really restricts the wave function to a certain set of values and gives us allowed energy states, such as those that are observed in atoms and molecules, and it forms the basis of all kinds of chemical analysis that we're going to do. We learned that short wavelength light equals high energy and that long wavelength light equals low energy. And de Broglie said, well, particles have a wave associated with them, and now we've given this thing a name, the wave function. And it's a function, and we can plot it if we have a functional form for it, which we sometimes do, and we can look at it. And what we find is that if the wave function has higher curvature, going up and down, up and down, like crazy, that's sort of like a photon with a short wavelength, and that means that that quantum state is higher energy than one that's all spread out and just kind of moping around, without very many up and down parts, without many nodes in it.
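Going back to the ring for a moment: the lecture doesn't write out the numbers, but assuming a ring of radius r, the matching condition says an integer number of wavelengths must fit around the circumference, and combining that with the de Broglie relation gives the allowed momenta and energies:

n\lambda = 2\pi r, \quad n = 1, 2, 3, \ldots \qquad p_{n} = \frac{h}{\lambda_{n}} = \frac{n\hbar}{r}, \qquad E_{n} = \frac{p_{n}^{2}}{2m} = \frac{n^{2}\hbar^{2}}{2mr^{2}}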
I'm going to close here with the position operator, and we'll pick this up next time. We found that the momentum operator was minus ih bar times the derivative. The position operator (in fact, I gave it before as an example of an operator) is this: x hat on psi is just equal to the number x times psi, and the number x is the position of the particle. The momentum eigenfunction is e to the ipx upon h bar, and to make sure that the eigenfunction is normalized, we should include a normalization factor, which I'll just put as N, some number, to make sure it's normalized. And the question is, what does the probability density look like for a momentum eigenstate? Well, we just take phi star of p times phi of p. We get N star e to the minus ipx, times N e to the plus ipx; e to the plus times e to the minus, that's 1, because that's e to the 0. It doesn't matter whether the exponent is imaginary or real; that still works. We just get the square of N. But that's weird, because it says the probability density doesn't depend on x. It's just some number. And what that means, then, is that for a momentum eigenstate, the particle has equal probability of being anywhere at all, basically from minus infinity to plus infinity, equally likely. And so position eigenstates, where the particle is definitely within 10 to the minus whatever, no matter how small you want to make it, and momentum eigenstates, where the momentum is exactly determined, are two completely different aspects of measurement. And they're completely at odds. The best you can do is you can get the position to within a certain limit, and then simultaneously you can get the momentum to within a certain limit. But if you try to get too aggressive with one, you just squeeze this one in, then the other one gets wide, because somehow this area between the two of them is like pushing on jello or something: it squeezes out if you try to get too aggressive with it. And there's no way you can minimize that effect except to have the very best uncertainty on the inequality, but you can't make it zero the way you'd like to. You should think of e to the ipx as a corkscrew. If it is corkscrewing this way, the particle is going that way; if it is corkscrewing the other way, it's going this way. But in either case, it's just a constant thing corkscrewing around, like a corkscrew driving through a wine bottle cork: it's going to go a certain direction. A position eigenstate, on the other hand, shouldn't be like that at all. Instead, the wave function should be piled up like a big pile of sand at that one position, and it should be zero elsewhere, because we know that when we take the wave function and square it, that tells us the probability of finding the particle. So if we take some function and pile it up real steep, like the Eiffel Tower, then the particle is going to be there. And then it might have some uncertainty; it might be out here. But we could imagine piling it up very steep and very high, and that would be a position eigenfunction. Next time, what I'm going to do is take a model position function that's localized, and I'm going to expand it in terms of momentum functions and show you that the momentum of such a function becomes more and more uncertain as we make the position sharper. And then finally, after the first week of class is over, we're going to introduce the wave equation (that's what we didn't have so far) that tells us exactly how these wave functions move forward in time and how they have certain energies and other properties. And that allows us then to discover what these wave functions actually are. So we'll pick it up there next time.
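(To recap the key computation from this lecture in symbols: the momentum eigenstate has a perfectly flat probability density,

\phi_{p}(x) = N\,e^{ipx/\hbar}, \qquad |\phi_{p}(x)|^{2} = N^{*}e^{-ipx/\hbar}\cdot N\,e^{ipx/\hbar} = |N|^{2},

a constant, independent of x.)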
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:19 The Postulates of QM 0:05:30 The Momentum Operator 0:08:17 Basis Functions 0:13:44 Orthogonality 0:28:50 Uncertainty 0:30:51 Complementarity 0:35:00 Classical Atoms 0:39:12 Wavefunctions and Orbitals 0:43:37 Confined Systems 0:45:49 The Position Operator
10.5446/18880 (DOI)
Welcome back to Chemistry 131A. Today we're going to talk about particles, waves, the uncertainty principle, and some of the postulates of quantum mechanics. As you recall from the last lecture, we found out that at the turn of the last century, by which I mean around 1900, particles were behaving strangely, and classical physics was not accounting for all the observations. And so a new theory was put forward, which came together over a period of time, called quantum mechanics. And like any fundamental theory, it has to be compared with experiment. And so there were experiments that were done. We saw some more modern experiments last time, in which an electron in an electron microscope behaved very much like a wave: the experiment of Tonomura at Hitachi. And in fact this wave behavior had been anticipated by Louis de Broglie (I believe it's actually pronounced "de Broy", but just to keep it clear, we'll say "de Broglie" anyway). He proposed, in fact, in 1924 that all particles have an associated wavelength that is related to their momentum, and that this wavelength just follows the same relationship as that for a photon. We saw last time that the photon momentum was given by h over lambda, and de Broglie proposed that in fact there was a wavelength lambda that was equal to h upon p for a particle, not just for a photon. And in fact in 1927, three years later, Davisson and Germer showed that an electron beam fired at a nickel crystal showed a diffraction pattern. And that's a wave phenomenon. And furthermore, they looked at what the wavelength of these electrons in the beam would have to be, and the wavelength was very, very close to the exact prediction that de Broglie made. So the question is, where is the particle? In classical mechanics, the center of mass of a particle has, in principle at least, an exact location at all times. In quantum mechanics, it's not quite so clear. In classical mechanics, the particle follows a trajectory. What a trajectory is, is an exact specification of the position of the particle and the momentum of the particle (or its velocity, if the mass doesn't change) at all times. And that's in fact how we do all kinds of calculations in classical mechanics, whether I'm going to shoot a shell and have it land somewhere or anything along those lines. And that works extremely well for large objects. But it fails for small objects, because they show this strange wave behavior. Now the problem with something that's demonstrating wave-like behavior is that we can't say for certain where a wave is, because waves tend to spread out over time. And so we're caught in a little bit of a difficulty, because if we can't actually say where the center of mass of the particle is, if it appears to be blurry and we can't specify it exactly, that means that we can't have a trajectory. And the trajectory, following Newton's laws, is exactly how you calculate where things are going to end up. And so now you've got to have a new method to calculate how things are going to behave if they're showing this wave-like phenomenon. And so that was a big chore, actually, and took a lot of very smart people a very long time to work out. In 1927, Werner Heisenberg made this blurriness more formal in the famous uncertainty principle, which states that no matter how you design an experiment, it is impossible to measure the momentum and position of a particle with arbitrary accuracy.
There is a certain minimum amount of uncertainty that's left over, given in this famous expression: delta p delta x is greater than or equal to h bar over 2. It's important to emphasize that this has nothing to do with your experimental apparatus having some sort of deficiency. This is an idealized experiment, done as well as you could possibly do it in this universe, and you still cannot simultaneously specify the two things at once. In fact, the quantity h over 2 pi occurs so often that we invented a shorthand notation for it: h bar, written as an h with a slash, sometimes called "h cross". And you'll see that often in the formulas that we use, because we get tired of writing 2 pi so much. The uncertainty principle says that we just cannot simultaneously determine position and momentum to arbitrary accuracy, no matter what we do, even in an idealized experiment. And that runs counter to our everyday experience, where we seem to be able to watch a rolling marble, plot its path, and figure out where it is basically about as well as we want to. And so there must be something in this that makes it different for small particles. And the something is the exact size of h bar. If h bar were bigger, we would notice all these things happening with big objects, but h bar is so small that we don't notice it the same way. We don't really notice the momentum of a photon; otherwise the lighting that is on me now would be pushing me around, and I'd have to fight like a mime to keep my position. Now the rationale is this: when we think of looking at something, we're looking at something with the lights on. We can't look at something in the dead pitch black and see anything. But if you've got a small particle, then, as we've learned, light consists of packets, photons, and these have momentum and energy. And so when we try to see where a very small particle is, we can't just look at it. We have to bounce something off it. We have to use something like light. And the light itself is going to change the momentum of the particle. And if we want to pin the position down very closely, then we have to use a short wavelength of light. But that's a high frequency, and we learned that the quantum energy of light is h nu. And so if nu is high, that means that we're going to come in with a ton of energy, like an x-ray, and then that's going to boot the particle around. And so although we could say it was just there, now we can't say very well what its momentum is. And these ricochets happen all the time. And so we have a fundamental problem if we try to do it. Now if we turn off all the lights, then we know that the particle is moving with a certain speed, but then of course we can't tell at all where it is. So for small objects, the photon kicks the particle, and that creates the fundamental problem. Now, we don't really know how big an electron is; it appears to be a point if you do experiments with it. But to tell where it is, we need a wavelength of light that's small. Just the same way, if you put on infrared goggles and you look around at night, you can see heat, but everything's much blurrier, because it's not as sharp as visible light; it has a longer wavelength. So we need a small wavelength of light to get a particle down to what we would consider to be a reasonable precision of measurement. And that gives a big kick to something like the electron in an atom. And so the electron momentum in that case becomes very uncertain.
Conversely, you can think of how you might design an experiment that would measure the momentum of a particle. One way to do it would be to have a very, very long, thin tube and have particles coming in. And if they aren't going straight along the tube, then they hit the wall and they're out of there. And you can have some choppers, like fan blades, at certain distances and moving with certain rotational speeds. And if the particle happens to be going the right speed, so that it goes through the hole of this fan and then the hole of that fan and so on, then you know for sure that it has a momentum within a certain range. So you've isolated it very well. But if you really want the momentum to have a very, very small uncertainty, that means you're going to have to have a very, very, very long tube. And then the particle could be anywhere inside the tube, so you don't really know its position. And of course, if you open it up so it's not dark, and you put light in, then that fouls up the momentum, as we said before. So we can't specify the position of any kind of object without light to see it, or something else, which would probably be even worse. Now for macroscopic objects, the uncertainty principle does not limit us. We're always limited experimentally for anything we do that's of a size we can see, but that has nothing to do with the uncertainty principle. For very small objects, though, like a single electron, it becomes the major factor. So here's a practice problem that's meant to illustrate this difference. Practice problem 3: suppose we take a one gram marble and we know its position to a tenth of a millimeter, which is pretty good, and we know the position of an electron to 200 picometers, which is 200 times 10 to the minus 12 meters. The question is, what would be the minimum uncertainty, according to the uncertainty principle, for the velocity in each of these cases? So first of all, the uncertainty in momentum is in the velocity, not in the mass. The mass stays fixed, so delta p becomes m delta v. And then we use the uncertainty principle: delta p delta x is greater than or equal to h over 4 pi. We have to be a little bit careful with the units. We were given the mass in grams, but we have to convert to kilograms, and we were given the uncertainty in millimeters, but we have to convert to meters, because we're using MKS units. And if we take account of all those factors, then we find that delta v is h, which is 6.62 times 10 to the minus 34, over 4 pi, times 10 to the minus 3 kilograms (that's our 1 gram) and 10 to the minus 4 meters. And if we work out the units carefully, we find that the uncertainty in velocity is about 5.3 times 10 to the minus 28 meters per second. For reference, an atom is about 10 to the minus 10 meters across, and so this uncertainty is way, way, way, way smaller than anything we could possibly ever notice. It's just as good as almost infinite precision as far as we're concerned. For the electron, however, if we do the same calculation, but here we put in 9.1 times 10 to the minus 31 kilograms and 200 times 10 to the minus 12 meters, then what we find is that the uncertainty in velocity is about 2.9 times 10 to the plus 5 meters per second. So that's almost 300,000 meters per second. If you've ever run a 10K, you realize that that's running pretty fast. And so trying to localize the electron down to just 200 picometers means that its velocity becomes very, very uncertain. So the uncertainty in velocity is very, very large in that case.
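As a quick numerical check of practice problem 3, here is a sketch of my own, just plugging into the inequality:

import math

h = 6.626e-34                       # Planck's constant in J s

def min_dv(mass_kg, dx_m):
    # Minimum velocity uncertainty from dp*dx >= h/(4*pi), with dp = m*dv
    return h / (4 * math.pi * mass_kg * dx_m)

print(min_dv(1e-3, 1e-4))           # marble: ~5.3e-28 m/s, utterly unnoticeable
print(min_dv(9.11e-31, 200e-12))    # electron: ~2.9e+5 m/s, enormous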
And the reason why there's a large uncertainty for the case of the electron is that the electron has such a small mass, whereas for macroscopic objects in the 1 gram range there's no problem. Okay. Now, de Broglie said, look, matter has a wave associated with it, which is kind of a semantic dodge. It's not as if matter has suddenly become a wave. We can still detect particles: in the Tonomura experiment, when they hit the screen, they give a dot. And we know how to characterize a wave: there's a phase, a frequency, an amplitude. And in fact wave equations were known from electromagnetic radiation; Maxwell's equations, which seemed to indicate that light was a wave and explained many, many of its properties, amount to a wave equation. And so physicists knew how to write those things down. But there's kind of a question here as to whether this associated wave is a real thing or not a real thing. Is it a calculational device, or is it a real thing? Well, Davisson and Germer seemed to indicate that it is a real thing, that this de Broglie wavelength, for small particles, can be the main thing, depending on what kind of measurement you're making. So the question is, is the de Broglie wavelength an actual measurable thing, and if it is, then what equation do we write for the wave? And finally, if the behavior is wave-like, then how do we explain the sharp spots in the Tonomura experiment? When we wanted the electron to slip through both of the slits, it was convenient to think of it like a wave, because a wave can do that and a particle cannot. But when it hits the detector, it seems like it collapses, like a poorly rigged camping tent, onto just a single point on the detector, and it certainly doesn't light up the detector like a wavy thing. We see these sharp spots, and only after we make a lot of these measurements do we in fact see that the aggregate behavior is wave-like. Each individual one doesn't look wave-like. This took a lot of thought to sort out, for sure. We don't notice any wave-like behavior with macroscopic objects. When we move around, we don't notice that there are waves coming off my hands, or that I can't tell where my fingers are, or anything like that. And the reason why is the same reason as with the marble: the small value of Planck's constant is the explanation. We're just so much bigger than h bar that we don't notice it. And for an illustration, let's do another practice problem, practice problem 4. Let's figure out the de Broglie wavelength of, first, a 5 milligram grain of sand moving at 0.1 centimeters per second, blowing along the beach, and second, an electron moving at 1 kilometer per second. Well, for the grain of sand, we just plug into the de Broglie wavelength formula, lambda is h upon p. And then the rest of it is what a lot of chemistry is: unit conversion, keeping track of the units, crossing them all out, and making darn sure at the end that the units are what you intend them to be. So in this case, I explicitly wrote out that a joule is a kilogram meter squared per second squared. Cross out the joules, cross out the kilograms, make sure the milligrams are turned into kilograms, make sure the centimeters are turned into meters. And I find that the de Broglie wavelength for the grain of sand is about 1.3 times 10 to the minus 25 meters. That means the associated wavelength is much, much, much smaller than any nominal size we would think of for a grain of sand, which is certainly much bigger than an atom. However, for the case of the electron, the de Broglie wavelength, working out the same math but using the electron mass and the velocity given, now converting kilometers to meters, works out to 7.3 times 10 to the minus 7 meters. An atom is 10 to the minus 10 meters. And so now we're talking about a wavelength of the electron, moving at this speed, that's much, much bigger than anything we would think of as the size of the electron; because even if we don't know exactly what the size of the electron is, we know that atoms have electrons in them, and therefore the size of the electron has to be much smaller than the size of the atom, otherwise they'd be like beach balls and they wouldn't fit. And so when the wavelength that we figure out from the de Broglie formula is in fact much bigger than any nominal size of whatever it is we're considering, at that point we have to say watch out, because whatever this thing is, it could surely show quantum behavior. When the de Broglie wavelength is very much smaller than our idea of the size of the thing, then we aren't going to see any kind of wave behavior, and we might as well just cut to the chase and use classical mechanics to figure out what's going to happen. So that's summarized in this next bullet point: we don't notice any wave behavior if the particle has a very small de Broglie wavelength, and if it's larger, then we surely do notice it, at least in some experiments.
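And the same kind of check for practice problem 4, again a sketch of my own:

h = 6.626e-34                       # Planck's constant in J s

def de_broglie(mass_kg, v_m_per_s):
    # de Broglie wavelength: lambda = h / p = h / (m v)
    return h / (mass_kg * v_m_per_s)

print(de_broglie(5e-6, 1e-3))       # sand grain: ~1.3e-25 m, far below any nominal size
print(de_broglie(9.11e-31, 1e3))    # electron: ~7.3e-7 m, far above atomic size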
Now, what are we going to use to actually figure out what something's doing? For particles, we had Newton's laws; we had ways of figuring out, from F equals ma and so forth, what was going to happen. Now we have this de Broglie wavelength, but we still don't have a wave, and we don't really want to call the particle a wave, because that's a little bit confusing; it makes you wonder what a particle ever was. And so what we do is we have kind of a little bit of a dodge. We speak of the wave function of a particle: the wave function is associated with it, and it's given the symbol psi to describe its behavior. And whenever chemists want to keep people out, they switch to Greek letters, because if you switch to Greek letters then it seems much, much more intimidating, and you have job security. But the symbol psi is universally interpreted as a probability amplitude (and I'll get to that in a minute), and if we take the absolute square, we get a probability density. And there's a reason why we're using probability and not certainty: part of that has to do with uncertainty, and part of it has to do with the very nature of measurement, as we'll see at the end of this lecture or possibly the beginning of the next lecture. We can only know, in quantum mechanics, the probability, not the certainty, as frustrating as that might seem, of finding the particle somewhere. Due to its wave nature it appears to be spread out, and when we measure it we interfere with it somehow, and that causes the measurement to collapse, like the tent I mentioned. But the probability is uncertain until we finally make the measurement. It's as if I take a die and I throw it, and I'm not looking at it, and then it lands and it bounces and it rolls around and so forth, and then I look at it and it's a 2. And I say, well, the probability beforehand of getting a 2 was one sixth, but in fact, now that I see it's a 2, the probability is 100 percent. And we're going to see that phenomenon when we make a measurement.
Once we know, all the other possibilities seem to have just vanished somehow, which is a little bit mysterious, but that appears to be what happens, and that's a common interpretation anyway of the theory. When we measure the position, we can get different values, and even though we have exactly the same electrons coming through the two slits and everything is exactly the same each time, and we believe all electrons are identical, we still get a distribution of different values. We don't necessarily get one dot. In fact we saw for sure we didn't get one dot. We got dots all over the place, and when we added them up it was much like looking at a billboard. When you're too close it's just dots. When you get back you see the big picture if you have a lot of dots, and then you see that it looks like a wave. And keep in mind that this is in a hypothetical perfect experiment. It's of course not possible to do a perfect experiment, but it's possible to get very close to a perfect experiment with some kinds of setups if you're careful, and even then you get a distribution of results which is just far, far greater than any uncertainty in whatever you set up, and so it looks like there's something else at play. And you might think, well, maybe there's noise. Maybe there's some kind of thing that we don't perceive and it's there, like road noise. You only notice its absence when you go camping, but you don't tend to notice it when it's there, even though it's always there. So maybe everywhere when we're shooting these electrons and photons there's some kind of noise or something else around, and if we were smarter and we figured out what that source was, what was jiggering things around, then we might be able to get rid of it, and then we might not have this theory of quantum mechanics. We would have a theory where we could measure things the way we want to and so forth and so on. And in fact it seems like that's not the case: quantum mechanics really says that this probability is the nature of nature, and not the result of an incomplete theory that left out something hidden that we just couldn't figure out. One of the biggest luminaries so far in physics didn't like this theory. Einstein famously said God does not play dice with the universe. Keep in mind though that very bright people can be wrong about things. Linus Pauling, who was one of the brightest guys so far in chemistry, had a theory that vitamin C in huge doses would prevent prostate cancer, and that was disproven even though he believed it ardently. And in this case apparently even Einstein was incorrect. In fact it seems like there's nothing but probability. There is nothing else. For example, radioactive decay doesn't depend on pressure, temperature, or anything else that's been explored to any appreciable degree. That's why we can use it to date things and figure out how old something is and so forth and so on. And we believe that all the nuclei are identical. They all have the same number of protons and neutrons, and there they are in the sample. And yet all we can know is the half life, which is the time, for example, for half the atomic nuclei in the sample to decay away into some other element.
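A tiny simulation makes that point concrete. This is a sketch, not from the lecture, assuming each nucleus independently has a 1/2 chance of surviving each half-life; the sample size is hypothetical.

    import random

    # All nuclei are identical; which ones decay is pure chance.
    # Only the aggregate count is predictable.
    n = 100000                            # hypothetical number of nuclei
    survivors = n
    for period in range(3):               # watch three half-lives go by
        survivors = sum(1 for _ in range(survivors) if random.random() > 0.5)
        print(period + 1, survivors)      # roughly n/2, then n/4, then n/8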
And for carbon 14 the half life is about 5,730 years, which makes it convenient for lots of measurements of things that have been around since humans have been around, but not too useful for something that might be extremely old, because then there's no carbon 14 left, and so when you try to count it you just see nothing, and you can say it has to be older than this but you can't really narrow it down too much. But if we look at our identical sample and we watch them, there is no way we can tell which particular nucleus out of all the identical set is going to decay. We can't beforehand make any kind of experiment that's going to tell us that. All we can say is that in 5,730 years there are going to be half as many as there are today. And so this means that really probability is the whole thing when it comes to something like this. So there are some postulates of quantum mechanics. We need to understand them because we need to have this new theory down pat. We need to know what assumptions it's making, and these postulates form the basis for all of our understanding and for all the detailed calculations that we might undertake. The first postulate is this. At any time the state of a quantum system is described as fully as possible by the wave function psi, which depends on the coordinates of all the particles that make up the system. These could be the electrons and the nucleus in an atom, or something bigger like a molecule. But in any case, if we know psi, we know everything that it is possible to know about the system. And what that means, comment one: since the wave function contains all that we can know, it follows that most of the time we do not know the wave function, because in order to know the wave function we would have to be making lots and lots of very clever measurements, and usually we don't. So usually we just know the dog's in the yard, but we don't know exactly where it is. So if we say we know the wave function of a system, we're making a very, very strong assertion. Comment two: we're often interested in wave functions for quantum systems that are not changing in time, like, for example, the properties of an isolated atom or an isolated molecule. And in that case we use a lower case psi to denote the wave function. But in some cases we might be interested in a time dependent phenomenon, and then we use an upper case psi. The problem is they look very similar; usually you put a serif on the top to indicate that you're using the upper case psi, and usually you set time apart as a separate variable. So time is an input to the wave function: you have to know the time to calculate the wave function. And you write it as I've written on the bottom of slide 52 here. So the question is, how can we suppose that the particles, whatever they are, have exact positions R1, R2, R3 and so on, when a couple of slides ago we just decided that there's an uncertainty in the position of any quantum particle? Isn't this kind of using sort of bad logic here to be assuming this? And the answer is no, it's only with respect to measurement that we have to worry about the uncertainty principle. And even then it's only for joint measurement of something like position and momentum along the same coordinate axis: X position, X momentum. The variables here, the XYZ coordinates of all the particles, are better viewed just as simple parameters on which the wave function depends.
Now once we've calculated the wave function, then we can use it to describe all the measurements that we're going to make, and magically everything comes out just hunky-dory. Postulate 2 makes an assertion about probability. It tells us what this wave function means. The probability of measuring the position of a quantum particle at some position, let's say X naught, within some small region DX, small enough that the wave function doesn't change value very much over the small region, is given by psi star psi DX, or modulus psi squared DX, for a one-dimensional system. For a three-dimensional system we have to integrate over all the spatial variables, DX, DY, DZ, or in polar coordinates DR, D theta, D phi, and in any case now we have psi of R naught, modulus squared, times DV. Some books use D tau. I prefer DV because it reminds me that it's a volume. So our first comment is, what is the asterisk? What is psi star? The answer is that the asterisk denotes the complex conjugate, because the wave function is often complex. Now at first blush that seems like it might be a problem, because you're saying that this thing that's associated with a particle has got an imaginary part to it. But in fact not really, because I can write down a simple algebraic equation, let's say X squared plus 1 is equal to 0, and that has a solution, but the solution has an imaginary part. So if I just said, well, I don't want to have any imaginary numbers in the wave function, it would be like banning them in algebra: I wouldn't have a complete theory of the roots of polynomials or anything left over in my theory. So we accept this, but of course we realize probability is real, and that's exactly why we take the complex conjugate. So if you're given a complex number Z equals X plus IY, where X is called the real part and Y is called the imaginary part, then the complex conjugate is obtained by just changing the sign of the imaginary part. You can think of a complex number as having an X part and a Y part, and for the complex conjugate the X part is the same but the Y part is reflected to the other side. So if it's up here, it goes down. If it's down here, it goes up. And you just do that wherever you see I, mechanically, and it's very straightforward to do. So it's not a big deal in terms of calculation. And if you then work out what Z star Z is, or absolute Z squared, the modulus squared, you'll find that it's X squared plus Y squared, because you have to recall that I squared is equal to minus 1. And that's exactly what we want, because then it's a real positive number and it corresponds to the length of something. And usually when you have the length of something, that means you're going to be able to add them up, because lengths add, and in our case it's going to be probabilities that are going to have to add up, and they're going to have to add up to 1. Comment 2: the probability of finding the particle somewhere in the universe should be 100%. That is, the particle shouldn't disappear. Now in some cases particles are annihilated and they turn into other things, but we aren't going to consider those cases in chemistry. Those are for physics. In terms of us, if we have an electron, the chance of finding it somewhere has to be 100%, at least in principle. And that means that there's another constraint on the wave function. The wave function should be normalized. That is, the integral from minus infinity to infinity of modulus psi of X squared dx should be equal to 1.
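Both of those statements, the conjugate rule and the normalization condition, are easy to check numerically. Here's a sketch, not from the lecture, using a Gaussian as the example wave function:

    import numpy as np

    # z* z = x^2 + y^2, a real non-negative number, as a probability must be
    z = 3.0 + 4.0j
    print(z.conjugate() * z)              # (25+0j) = 3^2 + 4^2

    # Normalization: for psi(x) = pi^(-1/4) * exp(-x^2/2), the integral
    # of |psi|^2 over all space should be 1
    x = np.linspace(-10.0, 10.0, 100001)
    dx = x[1] - x[0]
    psi = np.pi**(-0.25) * np.exp(-x**2 / 2)
    print(np.sum(np.abs(psi)**2) * dx)    # ~1.0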
And that means that of course psi of X, whatever it is, has to be a function that we can integrate, and it'll turn out that it should also be a function that we can differentiate as well, so we can figure out where things are going in time and so forth. And that means that psi of X is really, in mathematical terms, a very well behaved function. It's not any exotic mathematical function that would cause us problems. Of course we put these limits of plus and minus infinity on the integral even though we think the universe is not infinite. But the math that we do is much easier when we make the limits infinity. And this is often true in all kinds of fields, where if you have an infinite sheet of charge it's very easy to calculate the electric field and so forth, and if it's a finite sheet it's much harder. There are terms to subtract. And if there's a funny shaped sheet it's really, really hard, and it doesn't teach you anything necessarily different. And so just to get the principle down you usually take a simple case, and often it's infinity. So we're going to assume that the universe is infinite. That won't make any difference for our calculations. Postulate three: for every observable property that we can measure, energy, linear momentum, position, angular momentum, there's a new player in the game. It's a linear Hermitian operator that acts on the wave function. Now this is a new object that many of you may not be acquainted with, so I want to take a little bit of time and explain what's going on. An operator is like the big brother of a function. When I think of a function, y equals f of x for example, I think the function grabs an input number and then returns a function value. So it grabs a number and it gives back a number. An operator takes in a function, the whole thing, and then gives a new function. And there are things that take in operators and give new operators, and so you can keep going, but in terms of this course we don't need those other objects. So I've written here that omega with a hat takes in the function f of x and returns the function g of x. In this course operators are going to be like gentlemen in the 1950s. They are not going to show up without a hat. And usually we just omit the extra set of parentheses. We keep the hat on so we know we're talking about an operator and not just a number or something else. And we simply write omega to the left of the function, always to the left, because it's acting on what's to the right of it. Omega acting on f gives g. And operators can be as simple as just multiplying a function by x, or even by a constant, because that gives a new function. Or even multiplying by one, so we get the same function back. That's still an operation. So here I've written the operator x hat: x hat operating on f gives the variable x, without the hat, times f. And that's the new function g. So if f is x, then x hat on f is x squared, and so on. Once the operator is done operating, the result is a new function which just has variables, and which might be the same function. So one of the things you're supposed to do in quantum mechanics when you see an equation with an operator in it is let the operator do its work and then get back to just functions that you can differentiate and integrate and so forth. So that's the goal. Don't leave the operator hanging around unless you have to. Comment four: operators have units. Multiplying by x is going to add length units to the new function.
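As a code aside, this operator idea maps neatly onto higher-order functions: something that eats a whole function and hands back a new function. A sketch, not from the lecture; the names x_hat and d_dx are just illustrative:

    # An operator takes in a function and returns a new function.
    def x_hat(f):
        return lambda x: x * f(x)          # (x_hat f)(x) = x * f(x)

    def f(x):
        return x                           # f(x) = x

    g = x_hat(f)                           # g(x) = x^2, a new function
    print(g(3.0))                          # 9.0

    # A numerical derivative operator (central difference)
    def d_dx(func, h=1e-6):
        return lambda x: (func(x + h) - func(x - h)) / (2 * h)

    print(d_dx(g)(3.0))                    # ~6.0, the derivative of x^2 at x = 3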
In chemistry we have to be careful about units. We can't just be multiplying by things and not know what the units are. We have to make sure we get the right units, whether it's energy or momentum or position. This leads to practice problem five, which is the following. Does a wave function have units? And if so, what are they? Well, let's go back to what the wave function was. We know the integral of the wave function squared represents a probability. Probability is a ratio and has no units. So we wrote, for a normalized wave function, that the integral of psi star psi is equal to 1. But the integral is against dx, and dx is like x. It has units of length, and therefore psi star psi, whatever it is, must have units of inverse length, or length to the minus 1. And therefore, since there are two of those guys, and changing the sign of the imaginary part doesn't change the units, psi itself must have units of length to the minus 1 half power, or 1 over the square root of length. For a three dimensional wave function, psi has to have units of length to the minus 3 halves power. The expectation value, or average value, of any observable, once you know the wave function, is given by an integral of a sandwich: psi star, the operator, and psi, against dx. This is for a one dimensional problem. The expectation value is usually denoted with brackets around the thing, which means an average value, or the value we expect with a very, very large number of measurements. But no single measurement need ever return the expectation value. For example, if I flip an unbiased coin and I count 1 every time it comes up heads and I count 0 every time it comes up tails, then the expectation value, if I make a very large number of tosses, is 1 half. But we never get 1 half in any of the measurements we do. We either get 0 or 1. Here's a challenge if you're interested, a little bit harder: what's the expectation value for throwing a pair of dice a large number of times? If you can do that kind of problem, you may have a history in playing craps or other gambling games. Postulate 4: the only possible result of a perfect measurement is one of the eigenvalues of the operator corresponding to the measured observable. That's quite a mouthful. What does that mean? Well, eigenfunctions and eigenvalues are central mathematical objects in the theory, and that's one reason why you ought to take math courses up to and including linear algebra, so that you can learn about these things without having to learn them on the fly while you're also trying to learn something else about the subject. An eigenfunction is a function which the operator returns unchanged except for multiplication by a number with units. So first of all, here's the form of the eigenvalue equation. I've written here omega hat on F gives little omega, which is a number with units, on F. But the main thing is that F is the same function and omega is a constant called the eigenvalue. And what we often want to do in quantum mechanics is, given an operator, to know what its eigenfunctions are, because the result of a measurement is always one of the eigenvalues of the operator. And if we don't know its eigenfunctions, we can't calculate its eigenvalues very easily. So to go back to the die: suppose we don't know anything about it. We might think, well, we could get one and a half for an answer, because we don't know how it's shaped, we don't know what it is. But in fact, if we look at it, it's got numbers, integers: one, two, three, four, five, six.
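An aside on that dice challenge from a moment ago: brute-force enumeration settles it in a few lines. A sketch, not from the lecture:

    from itertools import product

    # Coin: heads = 1, tails = 0; the expectation value is 1/2,
    # even though no single toss ever returns 1/2.
    print((0 + 1) / 2)                           # 0.5

    # Pair of dice: average the sum over all 36 equally likely outcomes.
    sums = [a + b for a, b in product(range(1, 7), repeat=2)]
    print(sum(sums) / len(sums))                 # 7.0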
Those are the only values you can get by tossing a die onto a table. You can't get something else. And just knowing that those are the only six possibilities that you can have is knowing a ton compared to not knowing anything, or thinking they can be whatever values you might dream up. So given an operator, we often have the task of finding all the possible functions f and all the possible values omega that make the eigenvalue equation true. And in mathematics, the set of eigenvalues is called the eigenvalue spectrum. That's just for you aficionados. Let's do a practice problem. Let's consider the derivative operator, d by dx. What's the set of eigenfunctions and eigenvalues for this operator? Well, we set the operator to the left of the function. The function's unknown and the eigenvalue's unknown. So we're just going to say the derivative of f is equal to z times f. And this eigenvalue equation, in this case, amounts to solving a first order differential equation. And my advice is that's another very good math course to take, so that you know how to do it. In this case, it's fairly easy to do. We separate variables and we write df upon f is equal to z dx. And then we put integrals on both sides. And then we realize that z is a constant, the eigenvalue, that does not depend on x, and therefore we can move z to the outside of the integral. We can look up the antiderivatives, use Mathematica if you've had Chem 5 to do the integrals, or you can actually put an equation like that into integrals.com online and it'll solve it for you. And you'll find that the natural log of f is equal to zx plus some constant, because this is an indefinite integral. And if we exponentiate both sides, we find the function f of x is equal to e to the zx plus c. We can factor that out as e to the zx times e to the c, and then we can let e to the c be some constant k, giving k e to the zx. And as a check, we can always substitute our solution into the original equation and see that it satisfies it. In fact, when I was doing differential equations, one of the most powerful methods was guessing. You guess the solution, put it back in, and see if it works out. And that can often be quicker than trying to do it the forward way. So here, let's do this. The derivative of f of x is the derivative of k e to the zx, and that's k times the derivative of e to the zx. Even I know how to do the derivative of e to the zx. That's kz e to the zx, and that's just z times f. So we've shown that the operator operating on f gives a number z, which doesn't depend on x, times f of x. That's exactly what we had to do. So the eigenfunction of the derivative operator is the exponential function, and the eigenvalue can be any complex number z, as long as it's a constant. And the reason why, when you solve differential equations, the exponential function occurs everywhere in the solutions of these equations is because it's an eigenfunction of the derivative operator. One comment: the derivative operator is linear. The integration operator is linear. And by linear I mean this: the derivative of alpha f plus beta g is alpha times the derivative of f plus beta times the derivative of g, for any functions f and g and any constants alpha and beta. However, the derivative operator is not Hermitian. And to go back to how we were going to characterize observables, the idea was we had to have a linear Hermitian operator. Now we know what linear means. We have to figure out what Hermitian means.
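Before the Hermitian part, here's the guess-and-check step done symbolically. A sketch using the sympy library, not from the lecture:

    import sympy as sp

    x, z, k = sp.symbols('x z k')
    f = k * sp.exp(z * x)              # the trial eigenfunction k*e^(zx)
    ratio = sp.diff(f, x) / f          # (d/dx f) / f should be the eigenvalue
    print(sp.simplify(ratio))          # prints z, a constant independent of x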
And it has to satisfy the following relationship: the integral of f star omega hat g is equal to the complex conjugate of the integral of g star omega hat f. So we put a star on the whole thing and we swap the order of the functions, and if that relationship holds, then the operator is Hermitian. And here f and g are any reasonable functions that are integrable in the usual sense. Okay, that's quite a bit for today. So we're going to take a break and come back tomorrow, and we're going to pick it up on what Hermitian operators are, cover a couple more postulates of quantum mechanics, and then do a few interesting problems.
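A numerical aside on that Hermitian condition before moving on: the test below checks it for the plain derivative, which fails, and for minus i times the derivative, which passes. This is a sketch, not from the lecture, assuming well-behaved test functions that vanish at the integration limits:

    import numpy as np

    x = np.linspace(-10.0, 10.0, 200001)
    dx = x[1] - x[0]
    f = np.exp(-(x - 1)**2)                  # two smooth, localized test functions
    g = x * np.exp(-(x + 1)**2)

    def hermitian(op):
        lhs = np.sum(np.conj(f) * op(g)) * dx
        rhs = np.conj(np.sum(np.conj(g) * op(f)) * dx)
        return np.allclose(lhs, rhs)

    deriv = lambda h: np.gradient(h, dx)         # numerical d/dx
    print(hermitian(deriv))                      # False: d/dx is not Hermitian
    print(hermitian(lambda h: -1j * deriv(h)))   # True: -i d/dx is Hermitian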
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:00:20 Louis de Broglie 0:02:32 Where is the Particle? 0:14:49 Waves 0:19:20 Practice Problem: de Broglie Wavelength 0:21:16 Wavefunctions 0:29:16 The Postulates of QM
10.5446/18879 (DOI)
Hi and welcome to Chem 131A, Physical Chemistry. I'm Dr. Shaka and I'll be leading you through this series of lectures on physical chemistry, starting with an atoms-up approach. Quantum mechanics is our first topic, which is kind of a rude introduction to the subject. But here we go. I'll give you some general course information and then an introduction to quantum mechanics as seen through the eyes of a chemist rather than a physicist. We have slightly different viewpoints on some things. So here's a preliminary problem, practice problem one. Here is a jumbled word and the question is, what word can you make? And even though there are five letters, you might find it pretty hard. But the answer is you can make tulle. But you have to have that word in your vocabulary if you're going to make it. And if you're in the fashion business or you're working with fabrics, you may happen to know that tulle is a type of fabric. But if not, you could play around with that word for a long time. The mathematical equations that we're going to be dealing with are quite similar. We have to rearrange symbols in ways that are legal, and we have to somehow see where to go. That means two things. You have to have the basic vocabulary to understand what it means, and you have to have enough background material that you know what a legal move is. And I will assume that you've read the background material in the book, the fundamentals. And if you haven't, then that's your first task right now. So here's some general information. Lecture attendance is optional except on exam days. We'll have one problem due each Friday. And we only have one problem because they're quite hard, actually. And we have a website, and on it we'll have what's new. But avoid sending me email. Just ask in person after class. And a warning: do not start the problem on Thursday evening. Please realize that it's one problem with multiple parts, and it's really quite difficult to actually complete it. And so have a go, clarify your understanding, and then try again. The TAs and I will go over some of the problems at the end of the chapter and clarify any ambiguous wording of the problem. Homework counts for 20% of the grade in this class. I'm not a big fan of making exams count a lot. There will be two midterms and a final exam. And reading, I can tell you, is not the best way to learn physical chemistry. Reading in chemistry is like tying your shoes to run a race. You have to tie your shoes, but that doesn't count. It's not training. What's training in chemistry is practicing, solving problems, visualizing what things look like, and trying to work quickly and accurately. The textbook we're going to use is Quanta, Matter, and Change by Atkins, de Paula, and Friedman. And we'll cover the first five chapters. But I'll warn you that textbooks are getting a little bit like Amazon.com. They're trying to be all things to all people. It's not necessary that you memorize lots of facts or small details or who did what. What's important is to just try to understand the ideas and develop an intuition. It can be done, even in a field like quantum mechanics. So, chapter one. Quantum mechanics is the study of the small, specifically things that we can't see. Viewed closely, the world appears to be digital. In other words, there are small packets of everything. And atoms are the smallest unit of a particular element. Of course, even the ancient Greeks realized that it might be true that things had a smallest indivisible amount. Now we know that that is in fact true.
But they're very, very small and very light. One neutral carbon 12 atom has a mass of only about 2 times 10 to the minus 23 grams. An electron is much, much smaller; it has a mass of 9 times 10 to the minus 28 grams. And these very small things are very unfamiliar to us because we can't see anything that small. So human intuition is guided by things that are around the same size as us. Things that are much, much bigger than us or much, much smaller than we are are hard to understand, and we have to use careful experiments to try to figure out what's going on. And likewise, protons and neutrons occur in integer units. You can have one proton or none, but you can't have half. And you can't have half an electron. This is pretty much similar to currency. There's a minimum amount of currency. It's different for every currency, but there's still a minimum amount. In the U.S. system, it's 1 cent. So you can have 1 cent, but you can't have anything smaller and have it be legitimate currency. Light: Newton actually believed that light also had a currency, that there was a minimum amount, that light was corpuscular, because light seemed to travel in straight lines just like a bullet fired from a gun. But later investigations in which light went through narrow slits or pinholes showed interference phenomena very much like water waves. And so the wave theory of Huygens eventually won out over Newton's corpuscular theory of light, because it didn't seem like the corpuscular theory could explain this kind of wave phenomenon. Waves have positive or negative phase, and so they can subtract as well as add. A particle can either be there, be positive, or zero, but it can't be negative in the usual sense. And so we wouldn't expect small numbers of marbles to somehow give interference phenomena if we fire them through slits. So for example, here's a picture of two slightly different wavelengths of a wave. And if we add them up, you can see that there are positions in space, because this axis is wavelength, where there is a very small response, and then there's another place where there's a very large response, and then a small response, and so on. And this is very much similar to sound waves, for example, where if you tune a violin, you hear the difference when you aren't in tune. You hear this wah wah wah, this sort of beating, and you're seeing in this picture that kind of beating occurring, and it's a universal phenomenon with waves. White light, it turns out, has a mixture of different wavelengths, and that was first surmised when white light was passed through a prism and resolved into a rainbow. But that doesn't necessarily mean that white light is composed of different colors, because it could be that the colors somehow come from the prism. And so it was only really when you took another prism, and then you took the same rainbow and returned it into white light, that people became convinced that white light was really a mixture of colors, and there was nothing coming necessarily from the prism.
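Going back to the two-wavelength picture for a second, the beating is easy to reproduce. A sketch, not from the lecture, with arbitrary wavelengths of 5.0 and 5.5 units:

    import numpy as np

    x = np.linspace(0.0, 110.0, 4001)
    total = np.sin(2*np.pi*x/5.0) + np.sin(2*np.pi*x/5.5)
    # Where the two waves are in phase the sum swings between -2 and +2;
    # where they are out of phase it nearly cancels. The pattern repeats
    # every 1/(1/5.0 - 1/5.5) = 55 units.
    print(round(total.max(), 2), round(total.min(), 2))   # close to 2.0 and -2.0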
When you have a convex surface near a flat surface, there's a difference in refractive index, and the condition for these beats to add up or subtract depends on the wavelength. This is a phenomenon called Newton's rings, a pattern of constructive interference that, for example, you can see if you spill oil, which tends to bead up on water, on a wet surface. And we used to always look at that as kids, because cars in Salt Lake City always had a lot of oil leaking out of them in those days, and when it rained we would spend a lot of time looking at these beautiful patterns. So here's an example, and we can see how the colors vary systematically with wavelength, and much like there were two beating patterns in that graph I showed you, this will repeat more than once depending on the total thickness of the contrasting media. There's another experiment that unfortunately made it difficult to understand light as waves, and that was black body radiation. So this is a classic thing where you think you understand everything, and you have a very simple calculation, and there are some very, very smart people like Lord Rayleigh, and you do the calculation, and then you compare with the experiment. This is the essence of science. If you think you understand something, you ought to be able to predict it, or explain why you can't predict it. But at this time there were Maxwell's equations, pretty much; people believed that these wave equations really described light in totality, and that light was an electromagnetic wave, and that theory was wildly successful. So we could explain the unification of electricity and magnetism, why the speed of light has its value, diffraction, reflection, refraction, how lenses work, and so on and so forth. But two crucial experiments showed that this description of light was incomplete, and the first one was black body radiation. So what is black body radiation? Well, if I take a black body, by which you could just mean a lump of coal or lamp black, and heat it up in a vacuum, so that there's no air and it doesn't burst into flame, it'll glow, and this glow gives a characteristic spectrum of colors just like white light has a characteristic spectrum of colors. And it was known that the color depended only on the temperature, hence we speak of red hot and things like that. And Rayleigh, with a correction by Jeans, calculated this spectrum for wave-like light, and what they found is that it would be much, much more likely that you would get a lot of high-frequency radiation, because the chance of getting each frequency was equally likely, and there are a lot more high frequencies than there are low frequencies. And this was called the ultraviolet catastrophe, because it basically predicted that if we opened something like a kitchen oven, then out would come a ton of x-rays and very high-frequency light, and kill us, basically, and that's obviously not what happens. So there was a big, big soul-searching, and there was a lot of thought about what might be going on, and some theories were put forward that were later proven not to be quite right, but it was Max Planck that found the correct solution. What he found that fit the observations, although he didn't really quite believe it himself, but he could follow through the physics, was that light was quantized. That is, there's a minimum amount of light, and the smallest amount was related to the frequency of the light, and once the frequency was chosen, the energy of the light was E equals H nu.
H has subsequently been called Planck's constant in honor of its discoverer. So the frequency could apparently be continuous, anything you want, but once you choose it, then there's a minimum amount, and if you don't have the minimum amount, there's no light. And since the minimum amount depends on the frequency, higher frequencies, if there's not enough energy around, can't have the minimum amount to make a single particle or quantum of light, and therefore those frequencies get cut off. So that gets around this problem with the x-rays killing us if we open the oven. So here's an equation that just gives the relative amount of light per unit frequency according to the Rayleigh-Jeans law, and you can see that it basically goes as the frequency squared, so as the frequency goes up, the amount of light that's predicted goes up faster and faster and faster, and that's not what's observed. Now what Planck did instead of this continuous equation is this. He quantized the photon energy, and he obtained this formula, which at first blush looks completely different. It has a cube in the numerator, and then it has this funny exponential of h nu upon kT in the denominator, and we can compare these two formulas on a graph and see what happens. And in this formula, let me remark that k is Boltzmann's constant, h is Planck's constant, and T is the temperature in Kelvin. And of course in physical chemistry, or chemistry of any type, you never quote temperature in anything other than Kelvin, because if you do, you're very likely to be wrong if you plug it into a formula. If you quote the temperature in Kelvin and you say it's a balmy day, it's 298 Kelvin, you may be eccentric, but you're never wrong. But in chemistry, if you use any other units, you're very likely going to be wrong, so you have to be careful. So here's a graph. Planck is in the green and Rayleigh-Jeans in the sort of pink color, and you can see that for a certain temperature of 300 Kelvin, they agree at very, very low frequency, where the effect of quantization is very small. And then as the frequency increases, Planck actually follows the observed distribution almost exactly, and Rayleigh-Jeans diverges more and more and more from it, and would continue to go up. Now, we can make a connection between the two theories, because we know they agree when the frequency is small and the energy is low. And so we can take this parameter H nu upon kT, and we can assume it's much, much less than 1, and we know from calculus that we can expand E to the X in a power series, 1 plus X plus X squared over 2 and so on, and then if X is small, then X squared is very small, so we can throw it away. And that means we can write E to the H nu upon kT as 1 plus H nu upon kT, and then if we make that substitution, we find that as long as h nu is not large compared to kT, we get exactly the same formula that Rayleigh and Jeans got. But if the frequency is high, where there was a problem, then it starts to deviate. And so that's very nice, because it shows you that you get the same result as the other guys got where their theory seemed to work, and that's what scientists often look for, of course. Now, the physical meaning of kT is that kT is really a measure of the random thermal energy that's available at a temperature T.
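That low-frequency agreement and high-frequency cutoff can be seen numerically. Here's a sketch, not from the lecture, assuming the standard spectral energy density forms of the two laws, which match the plotted curves up to overall constants:

    import numpy as np

    h, k, c, T = 6.626e-34, 1.381e-23, 2.998e8, 300.0   # SI units, 300 K

    def rayleigh_jeans(nu):
        return 8 * np.pi * nu**2 * k * T / c**3

    def planck(nu):
        return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

    for nu in [1e11, 1e12, 1e13, 1e14]:                 # low to high frequency, Hz
        print(f"{nu:.0e} Hz  RJ = {rayleigh_jeans(nu):.3e}  Planck = {planck(nu):.3e}")
    # At 1e11 Hz (h*nu << kT) the two agree closely; by 1e14 Hz
    # Rayleigh-Jeans keeps climbing while Planck is already cut off.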
When something's hot, things are moving, they're colliding, they're banging around, and there's lots of energy available to excite things and to create photons. But if the temperature is very cold, then there's hardly any energy around, and then you just don't have enough energy to make the minimum amount of a photon, and that's why cold things don't glow, but things that are heated up finally do start to emit a glow, like an electric element on a stove. So, at high frequency, the problem is we just don't have enough energy to make even a single photon. We just run out before we get there, and so the distribution has to fall off sharply. And the analogy I can give you is this: suppose the smallest amount of currency, the smallest coin, were $1,000. Then that means that a lot of people are going to have no money at all, because they don't have that much. The reason we don't notice the digital nature of light in day-to-day observation, we don't notice that it looks like sand or something like that, is because of the relative size of these two constants, K and H. Boltzmann's constant is 1.4 times 10 to the minus 23 joules per Kelvin, and Planck's constant is 6.6 times 10 to the minus 34 joule seconds, and that means there's a factor of about 2 times 10 to the 10, or 20 billion, between them, and that means that usually at low frequency there are plenty of photons around, but at high frequency we start to notice that H nu, the quantum of light, has a minimum value. The other experiment that really sealed the deal with respect to the corpuscular or quantized nature of light was the photoelectric effect. And it's not quite such a simple experiment as the black body radiation, but nevertheless, it is a pretty simple experiment, and the experiment is this. You evacuate a chamber and you have a clean metal surface, and you shine light on the surface. And what was observed is that electrons would come off the surface, and this is how you make a cathode ray tube, in fact. Electrons would come off the surface of the metal and would come into the vacuum, and they would be ejected. And ideally, if light were a wave, then what should happen is, if you have a wave coming in, it should sort of excite the electrons more and more and more and more, and then, boom, finally, just like pushing a swing, if you push it enough times, you can get the person moving. So that means that when you turn on the light, there should be a delay before the electrons are ejected, and you can measure that by chopping the light, turning it on and off as quickly as you need to. And the electron energy that comes out should depend on the intensity of the light. But what was observed are these three things. First of all, the photoelectrons, when they came out, were ejected essentially instantaneously, so there was no delay. The second point is that below a threshold frequency, there were no photoelectrons at all. And the third point is that turning up the intensity of a low frequency light made no difference. You still didn't get any photoelectrons. And Einstein interpreted this experiment in terms of photons, namely that the particles of light were coming in, and each particle of light could hit an electron, one particle hits one electron. And if that one particle that hits the electron doesn't have enough oomph to kick the electron out, then the electron doesn't come out, and having a lot of particles, none of which can, on a single hit, kick the electron out, doesn't help you.
You need one particle that has enough oomph to kick it out in one go. And he found that the particle had an energy exactly in accordance with Planck's formula. So now we have another experiment which is indicating that light, depending on its frequency, has a quantized energy, and we call that a photon, and we think of it as a particle. So as I said, if one photon hits an electron and the energy is good enough, it kicks it out; otherwise the electron stays in the metal. So here's an idealized view of the experiment. I apologize that the kind of gray of the potassium is hard to see, but this is potassium metal, used because it's very easy to kick electrons out, and there are three wavelengths, and the energies are quoted too, in electron volts. An electron volt is the energy that one electron gets by being dropped through one volt of potential difference, and it's 1.6 times 10 to the minus 19 joules. If we have 700 nanometer light, which is red, we get no electrons. If we shine in green light at 550, we get electrons that come out, and they come out with a maximum speed, which we can measure by timing when they hit a detector, of about 3 times 10 to the 5 meters per second. And if we use more energetic light toward the violet end of the visible spectrum, then we also get electrons out instantaneously, but now the speed of the electrons is higher, so the electrons have more energy. So here's a diagram that shows the energy balance. It takes a certain amount of energy, in this case for potassium two electron volts, to get the electron to part ways from the potassium atom, and then whatever energy is left over, since we believe energy is conserved even in this crazy realm of quantum mechanics, that energy must then be the kinetic energy of the electron, and so that makes it up to the top. And you can see that if the photon itself doesn't have enough energy to get up to the red bar, then there's no way that the electron is going to be ejected. And the energy phi is called the work function of the metal. It's two electron volts for potassium, but not for other metals. And the kinetic energy, as I showed, is just the difference. So we can express that in an equation. The kinetic energy of the electron is the difference between the photon energy and the energy to pry it out of the material, and that means that 1 half mv squared is equal to the difference, and so I can solve for v as the square root of 2 times the quantity h nu minus phi, over the mass of the electron. And what we can see is that the mathematics here gives us a clue that we might have a problem, because if h nu is not up to the red bar, if it's less than phi, then we get a negative number under the square root, which would give us an imaginary velocity, which is kind of hard for us to interpret. Just because you get an imaginary number from an equation doesn't mean it's wrong. We're going to see plenty of imaginary numbers, but in this case, when we interpret it as a velocity, we'd have to figure out what an imaginary velocity actually meant in terms of what we would see. Now, it turns out that we can only be sure of the energy of the electron for the potassium atoms that are very near the surface, so that the light hits and then ejects the electron. The potassium is a silvery surface, almost like a mirror, and so we could guess, and we'd be right, that the light doesn't really penetrate through like a window and come out the other side. So if we eject an electron from an atom further down, the electron might come up and hit another atom and slow down and heat up the material.
So we just look for the maximum energy electrons that come out, and that tells us what it is for the surface. And that incidentally lets us know that this kind of photoelectron spectroscopy is a very good technique to interrogate a solid surface, because you will only eject electrons very near the surface of the specimen, and what that means is that you won't see stuff underneath. So in some cases, if you make an alloy or you make a material, what you find out is that the surface tends to be enriched in one kind of element, and the bulk in the center tends to be enriched in another kind. And that could be very important if you're designing parts that are going to fit together, and you think the surface has a certain kind of composition. And in fact, because some atoms prefer to be on the surface because of the way they bond, the composition can be much, much different. And in that case, you can use photoelectron spectroscopy and you can do this ancient experiment. And since now all the work functions are known for all the atoms, you can easily figure out who's there. Usually, we know the wavelength of the light. And since the wavelength times the frequency is the velocity, which for light is given the special symbol C, we can also write the energy in terms of the wavelength, and here's the equation. The energy of a photon is H times C over lambda. Finally, if you take very energetic photons like gamma rays, then the speed of the electron could approach C, the speed of light. And in that case, we have to use a different formula, which I won't derive, but we have to use what's called the total relativistic energy, which is given by this formula: E squared equals P squared C squared plus M squared C to the fourth. And I think you can see that if P vanishes, so that you have no kinetic energy, then E squared equals M squared C to the fourth, and that's where E equals MC squared came from. M naught is just called the rest mass of the electron. It's the mass when it's not moving. And the relativistic momentum P is still just M times V, but M is not the rest mass but rather gets corrected by this formula that includes the ratio of the speed of the particle versus the speed of light. And interestingly, this formula also lets us figure out the momentum of a photon. So we start with the formula, and then we note that a photon has zero rest mass, and therefore the energy is just E squared equals P squared C squared. And we can take the square root of that and find that the momentum is E upon C, and since light's quantized, that's H nu over C, and since C is the frequency times the wavelength, we just end up with P equals H over lambda for the momentum. And that means that short wavelength photons have high momentum. And here's an application of this phenomenon. Here's a spaceship that set sail a couple of years ago, and it's a giant solar sail. Of course, you don't have to worry about air resistance if you're out in the middle of space, and with a reflective coating, you can actually use the momentum of the photons from the sun to steer your spaceship around, and you don't need any fuel or anything else. You can just use the sun itself to push you around, and you can turn things this way and that. It's kind of an interesting application of the photon momentum, and I'll let you speculate about how fast you think this spaceship could go in open space.
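To put numbers on that speculation: here's a sketch, not from the lecture, of the momentum of a single green photon and the radiation force on a sail. The sail area and the solar intensity near Earth are assumed values, not from the slide:

    # Photon momentum: p = h / lambda
    h = 6.626e-34
    lam = 550e-9                        # green light, m
    print(h / lam)                      # ~1.2e-27 kg*m/s per photon

    # Radiation force on a perfectly reflecting sail: F = 2 * P / c,
    # where P is the intercepted power.
    intensity = 1361.0                  # W/m^2, sunlight near Earth (assumed)
    area = 100.0                        # hypothetical 100 m^2 sail
    c = 2.998e8
    print(2 * intensity * area / c)     # ~9e-4 N: tiny, but free and continuous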
Now, there is a thing to take note of, and that is, even if you're going pretty fast, you don't have to worry about relativity unless you're at about 10% of the speed of light. If you're at 10% of the speed of light or something like that, then you may have to start worrying a little bit about relativity, but normally in chemistry we don't worry about relativistic corrections. So the only point of introducing this formula was to show that a photon, although it has no mass, has a momentum. So let's do a practice problem. Let's just confirm that the speeds that we quoted on the potassium photoelectric effect diagram are, in fact, the right speeds. Well, we could use either the wavelength of the light or the energy in electron volts. And as I told you, 1 eV is 1.6 times 10 to the minus 19 joules. But since we have the work function for potassium in electron volts, I think that's probably going to be the easier course. So for the 2.25 eV photon, the green light, we can set up our energy balance equation, that the kinetic energy is the difference between the photon energy and the work function. And then we can solve for the velocity, or the speed, of the electron that comes out. And you notice that when I solve it, I put in the units. In chemistry, it is very, very important to put in the units and make sure all the units go away except the one that you want to get in the end. So I'm taking 2 times 0.25 eV, dividing it by the mass of the electron in kilograms, and converting eV to joules. And then I'm remembering that a joule is a kilogram meter squared per second squared. I can remember that because I know that force is mass times acceleration. Acceleration, I can remember, is meters per second squared. So force is kilogram meter per second squared. And a joule is a newton meter, and I can remember that too. And so I add another meter, and now I see the kilograms go away, the eVs go away, the joules go away, and I have the square root of meters squared per second squared, which is meters per second. Whenever you do a problem in chemistry, you want to analyze it in exactly this way. If you just write down numbers with no units, you're very likely going to have some funny units left over, like the square root of eV over joule or something else. And without the units there to let you know that they didn't all cancel the way they should, you'll just get the wrong numerical answer. And if you write the wrong numerical answer on an exam, or submit it in a report, or build a bridge with it and it falls down, nobody is interested in why that happened. They're only interested in the mistake. So we went through this, and you can see that the way I quote it is, I keep a lot of digits and then I put the ones that I don't think are significant into parentheses. And then I can round it to give 3 times 10 to the 5 meters per second. And the same thing with the violet light: you just put in a slightly different number, and again, you get 6.2 times 10 to the 5 meters per second. Always retain insignificant digits in case you want to continue the calculation further on. And never, ever round in the middle of a calculation. Your calculator will hold 12 or 15 digits; keep all of them. Never round. And you can see in my examples, I take the time to write 1.602, et cetera. I look up the exact value, because if I do a lot of calculation and I start rounding things here and there and everywhere, by the time I get to the end, my accuracy is poor.
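Coming back to the practice problem for a moment, the same unit bookkeeping can be done in a few lines. A sketch, not from the lecture; the violet photon energy of 3.1 eV is an assumed value chosen to reproduce the quoted speed:

    import math

    eV = 1.602e-19                     # joules per electron volt
    m_e = 9.109e-31                    # electron mass, kg
    phi = 2.0                          # work function of potassium used here, eV

    for E_photon in [2.25, 3.1]:       # green, then violet (assumed), eV
        KE = (E_photon - phi) * eV     # leftover kinetic energy, J
        v = math.sqrt(2 * KE / m_e)    # from (1/2) m v^2 = KE
        print(f"{E_photon} eV photon -> {v:.2e} m/s")
    # prints ~2.97e+05 and ~6.22e+05 m/s, matching the diagram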
And sometimes, if I'm unlucky, it can be very poor. So at least keep all the digits. People kill themselves trying to get those digits on those numbers; that was years of work. And to just say, well, I can't even be bothered to punch them into my calculator, is really almost a crime. So, in many experiments light behaves like a wave. And the question is, if it behaves like a particle in these experiments, but it behaves like a wave with two slits and other things like that, then which is it? Is light a particle or is it a wave? And the answer is, it apparently depends on the nature of the experiment. Light itself seems to have both qualities at once. And even though we think of them as completely different kinds of things, apparently these two qualities are not mutually exclusive. But if light is a particle, then the thought is that if I shoot one particle at a time through the two slits, the interference phenomenon would have to go away. Because the reason we were getting the interference phenomenon is that we were getting all these waves going through together, and then they would add and subtract. But if they aren't there simultaneously, if there's only one particle going through at a time, pick, pick, pick, pick, then we would see just two piles. A particle could either go through this slit, and you get a pile here, or go through that slit, and you get a pile there. But interestingly enough, this fails. And so this two-slit experiment where you get an interference pattern gives exactly the same pattern even if you can verify that you're shooting photons one at a time. So you shoot one photon, another, another; you detect where they end up. And at the end of the day, you get an interference pattern. And the only way you can really try to explain that is that it seems like the photon, which you're claiming is a particle when it suits you, is somehow slipping through both slits. In other words, the particle can interfere with itself. This is a very, very foreign idea in terms of what we understand in the physical world, where if we have a particle and the particle goes through one slit, we know it went through that slit, and it doesn't somehow break up and then reconstitute itself on the other side. And this was one of the most discomforting things about this new theory of quantum mechanics, because it seemed to suggest something that was very foreign to our physical intuition and even foreign to our common sense notion about what a particle is. So if we fire one photon at a time and do photon counting, what we find is we have to fire a lot of photons to get good statistics. But when we do, if we have one slit, we get the pattern on the left. And if we have two slits, we get the pattern on the right. And we get the pattern on the right with two slits whether we fire the photons one at a time and take forever to do it, or whether we fire a bunch of them at once and they all go through. And that means that whatever the wave nature of light is like, it doesn't seem to have much to do with water waves. Even stranger, an electron has a certain amount of charge. It has a certain mass. And I've never seen half an electron. But if we fire electrons now from an electron gun, like an electron microscope, and we fire them one at a time, and then we look with two slits at what happens, we would expect to get, again, two piles of shot. If the electron went through here, it would go here, and it could have a certain angle. We get a big pile.
And then we'd get a pile over here for the ones that apparently went through this slit, because we can't control exactly, like a marksman, where the electrons go. So they may go through either slit, but whichever ones go through the slits, we should get two lumps. If you do this, and you fire them one at a time again, what you find is that you do not get two lumps. What you find is shown on the next slide. This is a brilliant experiment that was done at Hitachi using an electron microscope. And what you can see here is just how interesting it is, because when you have 10 electrons, you have just 10 spots. And it looks to me like they may have slightly miscounted, or there may be a glitch, because you can see 11 if you look closely. But anyway, you get 10 spots, and you notice that when the electrons hit the screen, you get a spot, as if each were a very tiny particle. And the slits are much farther apart than any kind of width of these dots. And then when you do 200 electrons, you get a shotgun pattern. And then when you do a lot more, you start to see ridges like a wave. And then when you finally do hundreds of thousands of electrons, you see this clear kind of tin roof appearance of the pattern of intensity, which, even though you shot the electrons through one at a time, seems to be indicating that each electron goes through both slits. This is even worse than the photon, because I don't have any particular picture of a photon. It was a mathematical thing that came up. That doesn't bother me maybe too much. But I certainly do have a picture of an electron. And if the electron is interfering with itself, the question you might ask is, well, which slit did the charge go through? Which slit did the mass go through? Why is it that whenever I look at an electron, I see exactly the same mass and exactly the same charge, and then when I fire them through these two slits, I do not? And the short answer to that is that if you look at which slit the electron goes through, you get two lumps of shot. So if you try to intercept the electron, and you try to see it by looking, then you get two lumps of shot, and the electron says, okay, you're going to look at what I'm doing; I'm going to go through either the left slit or the right slit. That's it. But if you don't look, which of course they were not doing, they had two slits set up and they fired electrons. Normally we think, well, if I want to see the rug, I want to see the monitor, I want to see whatever, I simply look at it. But in fact, what's happening is we've got light on, and because I'm so heavy and the light's so light, it's not moving me around. It's not doing anything to me. But in fact, if I've got an electron going through a slit, I can't just see it. It's too tiny. I need to shine some light on it. And when I shine light on it, I know the photon can even kick an electron out of a metal. And so the photon interacts with the electron and changes it. And so by trying to observe where it is, I actually change the nature of the experiment. And so it's very frustrating, because when I don't look, it does something incredible that I can hardly believe. And when I do look, it behaves exactly the way I would have thought it would behave. And what we'll do then is we'll close up there for this lecture. And in the next lecture, what I want to talk about is the connection that a man by the name of de Broglie made between the wavelength of these particles and the wavelength of light, which seemed to be a major advance.
And it's kind of interesting that that's the one big thing he did. He did that, and that was great, and then he never did much else after that. Okay, thanks very much.
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:05:31 Light 0:12:05 Quantization 0:19:10 The Photoelectric Effect 0:28:59 Photon Momentum
10.5446/18873 (DOI)
Welcome back chem bio fans. Today we're going to be talking about proteins and then we're going to go on to much sweeter topics as we transition to glycobiology. So I've got some really cool stuff in store for you today and I can't wait to get to it. Specifically we're going to be talking about glycan structure. These are the carbohydrates found on the surfaces of your cells. The things that make life taste sweet. We'll be talking about their reactivity, how enzymes can hydrolyze them, and then we'll talk about what it is that they're doing before we talk about why they're ultimately going to kill us. Why it is that carbohydrates, sugars, taste so sweet yet cause so much disease is something that fascinates me. I'd love to understand that better, so let's get started; hopefully we will by the end of this. Oh, a little picture. I have a picture of the great Hermann Emil Fischer in his laboratory around 1900. This guy is a superhero of chemistry, and he was doing things like working out structures of carbohydrates just years after people had figured out that carbons actually had stereochemistry. So this guy is a tour de force chemist, a giant in the field of glycobiology and chemical biology, and he's one of my heroes. Okay, some quick announcements. I'd like you to read chapter 7, work the odd problems, and then you're going to be receiving back your journal article reports sometime either today or maybe in discussion section. They are graded out of 75 points. I went for something like a bell curve, but I'll be honest, they were really good. I got some fantastic journal article reports. They were tough to grade. It was hard to come up with a standard distribution. I like to see that. So nice job on those journal article reports. You're all to be commended. Excellent job. All right, so here's what we saw last week. Last week I was telling you about how it is that proteins function. What it is that makes them so great when they do the things they do. And what we saw was that enzymes work by lowering the transition state energy for a reaction. Doing this, taking the transition state energy up here and pushing it down here, has the net effect of accelerating the reaction, speeding it up, pressing the go pedal to the floor. This is really cool. This is powerful. This makes biology possible. So because these enzymes evolved to bind to transition states, analogs of these transition states are very effective as inhibitors of the enzyme. They stick in there and they plug it up, like plugging up a hole in a door or something like that. They are very effective at shutting down enzymatic reactions. And in fact, we're going to see a few more examples of that today. In addition, we saw last week how enzymes take advantage of the diverse functionality of the amino acid side chains. I showed you, for example, examples of metal ions, magnesium providing Lewis acidity in the case of kinases. We talked about how lysozyme has both Bronsted acid and Bronsted base functionalities. And the two of them were like a one-two punch, where the acid was up here and then the base was down here, and then they switch places. This is truly remarkable chemistry in action and it's one of those power tools in biology. The next thing we saw, or rather this is what we're going to see today, is that additional functionalities from vitamins, cofactors, can expand the range of chemistry. So we start with the easy stuff, the Bronsted acids, the Bronsted bases, but just you wait.
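To put a number on that transition-state picture: by transition state theory the rate constant scales as exp(-ΔG‡/RT), so the fold acceleration an enzyme buys depends only on how far it lowers the barrier. A hedged back-of-the-envelope sketch; the barrier reductions below are made-up illustrations, not measured values for any particular enzyme.

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 310.0   # roughly body temperature, K

def fold_acceleration(barrier_drop_kj: float) -> float:
    """Rate enhancement from lowering the activation free energy by barrier_drop_kj (kJ/mol)."""
    return math.exp(barrier_drop_kj * 1000.0 / (R * T))

for ddg in (10, 30, 60):   # illustrative barrier reductions, kJ/mol
    print(f"{ddg} kJ/mol lower barrier -> ~{fold_acceleration(ddg):.2g}x faster")
```

A 30 kJ/mol reduction already buys about five orders of magnitude, which is the kind of acceleration that makes biology possible.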
I've got more functionality for you, and this other functionality is super cool because it allows a whole new range of reactivity and access to reactions that otherwise would not be possible in water. And then finally, the major theme really of our last lecture is this concept of dynamics. And this is one of the challenges that has tormented me really for years. I've always, hmm, always is too strong a word. I've wanted to, for the last 10 years, understand why it is that enzymes flap around the way they do and how that flapping aids in their catalysis. And what I showed you were the sort of cutting-edge theories about how it is that these things work. And I showed you examples where that flapping is both beneficial and deleterious for enzyme function, right? We saw an example where the enzyme protein kinase A was munching away in a neat waltz, and we even called it a waltz, right? Because it had three steps, one, two, three, just like a waltz. And then sometimes it got stuck on step one, two. And it was doing almost like a rumba. It was going backwards and forwards, one, two, one, two, and not going into the waltz like it should have been. And that's a disaster for the cell, or rather for the enzyme, because when that happens the enzyme is not going all the way to make its product. It's getting stuck. And that stuckness is inherent to the function of a protein kinase like protein kinase A. This is a protein that evolved not necessarily to be a monster of catalysis. It evolved to be a highly regulatable molecular switch, where you can switch it on and you can switch it off. And that ability to get stuck doing one, two, one, two, where it should be doing one, two, three, one, two, three, is useful. That's the kind of ability that allows protein kinase A to be a useful tool in cell biology. Okay, so hopefully you're getting some idea of how enzymes work. I want to pick up on a topic I kind of skimmed over, but it's super important, and frankly I felt guilty for skimming over it. This is just too cool and too interesting. So let's go back and take a quick look. I want to start by talking about a class of enzymes called serine-based proteases. This is a class of enzymes, like the cysteine proteases that we saw last time, that relies on having a serine functionality in its active site. And that serine functionality has an analogous role to the cysteine found in the active site of the cysteine proteases. In both cases, the serine or the cysteine acts as a nucleophile to attack the amide bond that's going to be hydrolyzed. And so let's take a quick look at this. In the active site of serine proteases, there is a catalytic triad consisting of the serine functionality together with a histidine and an aspartic acid. And these have the remarkable ability to act as a proton relay system, where the proton gets handed off from one functionality to the other. And by doing that, that actually allows the serine to be a much better nucleophile. So check this out. In this case, what we're seeing is that the serine is getting deprotonated by the nitrogen of this imidazole of histidine. And in turn, the imidazole is passing off its other proton to a nearby carboxylate functionality of an aspartic acid. So again, it's a relay system. A proton comes off here, gets passed through an intermediate, passed over to this other guy over here. This is powerful stuff.
Okay, so now the resultant alkoxide is a super nucleophile that can then attack the carbonyl of the amide functionality. And in turn, this tetrahedral intermediate can collapse to give us a hydrolyzed amide bond. But wait, there's more. Of course, we now have a covalent intermediate where the serine of the enzyme is stuck as an ester to a fragment, to a half of the hydrolyzed amide bond. And so what the enzyme does is turn on this machinery again, and this time it operates to hydrolyze this ester functionality, in the end returning us to a serine functionality and giving us a carboxylate as the second half of the hydrolyzed amide bond. This is pretty cool chemistry. I'm showing you chemistry where we're seeing a charge relay system. There are a lot of subtleties here that I'm kind of glossing over. But in the end, this is a very effective cutting machine. This guy gets out there and it's like Edward Scissorhands going to town. This thing just starts chopping apart. And in fact, it's working right now in your stomach. Maybe not my stomach. I'm kind of hungry. But if you have something in your stomach that involves proteins, these guys are at work chopping apart those proteins as we speak. All right. Now, this is the part that absolutely amazes me, astounds me, and keeps me dreaming at night. This is the catalytic triad up close. So check this out. In this case, I've shown you the arrows. That's the same thing I showed on the last slide. The truth is, though, these arrows are kind of an approximation that we use, and a better picture, a better depiction of this, would be to have these hydrogen bonds between the serine functionality and the histidine, and to the carboxylic acid. The problem is these hydrogen bonds take all the joy out of arrow pushing. They suck the marrow out of what makes it so fun to push arrows and send protons and electrons flying around. And so this is an accurate depiction. This resonance structure, or rather this equilibrium, is an accurate depiction. But I like drawing it like this. And so the convention that we're going to use is an understanding that when we draw it like this and we have one step, that one step is occurring in a concerted mechanism, with one fell swoop. Swoop. Okay? So this isn't going bonk here, takes a step, waits a little while, bonk there. No, instead this thing is, with one fell swoop, stepping over and doing the whole reaction all at once. All right. Yeah, we'll skip the zinc proteases. It pains me to do this. They're fascinating. I'd like you to read about this topic in the book. These are really cool too. All right. Something that we saw is that many proteases have a proenzyme arm that protects them from turning on until an appropriate time. And this is especially important for the proteases implicated in the blood clotting cascade. Because you sure as heck don't want your blood clotting at random times during the day, right? That would be a disaster. And so we saw that peptides in the proenzyme can be very effective at blocking the active site. For this reason, proteins in general are very interesting as blockers of protease active sites. And these are found in the blood-sucking animals of the world. These are things like leeches, ticks, vampire bats. And when these little blood suckers grab onto you, they've got the fangs, they're grabbing on, they've punctured, they're sucking the blood. They want to prevent blood coagulation, right? They're trying to prevent blood clotting.
If the blood clots, they don't get a tasty dinner. And so they've evolved inhibitors like this one that block the active site. The paradox here is that, of course, proteases have equally evolved to chop apart proteins. And so the real trick of these types of inhibitors, these protein-based protease inhibitors, is that they get into the active site but stay away from the catalytic triad that I showed earlier, which is sort of highlighted here. A little hard to see, perhaps, but it's in there. And so by keeping back, they manage to avoid getting hydrolyzed, yet they take advantage of the abundant molecular recognition opportunities that span this protein and zoom all the way around it. And doing that makes it a very tight binder. And that's critical, right? If these things aren't really tight binders of the proteases, it's game over and the blood-sucking animal is going to go hungry. All right, so this is the kind of thing that interests chemical biologists because it gives us control over enzyme activity. We're really interested in developing inhibitors of proteins and enzymes that allow us to add the inhibitor at a specified point, shut off the activity, and then study what happens. This gives us a powerful tool for figuring out the function of that protein inside the cell. And so I'll give you some examples. Covalent or mechanism-based protease inhibitors are often, though not always, irreversible. And here's one example. This is a chloromethyl ketone. Okay, notice this is a ketone with a chloromethyl functionality. And it positions this electrophilic chloromethyl substituent beautifully, right up close to the nucleophilic histidine functionality. And it also has the ketone, which can act as an electrophile to attract this serine alkoxide. So the serine alkoxide doesn't look too carefully at things, thinks this is an amide bond, and then gets in there and goes to town and attacks, because that's what it evolved to do, and it attacks there, giving a tetrahedral intermediate which binds very effectively to the enzyme. But at the same time, this chloromethyl is positioned neatly for an SN2 reaction, an SN2 attack on this methylene functionality over here, giving us another covalent bond. And so in the end, this enzyme is shut down. It can't do anything. It's got a covalent bond on this side and a covalent bond over here, and the inhibitor has the enzyme in a bear hug grip. It's totally stuck. It's not going anywhere. This is game over for that poor enzyme. Okay. So it turns out that because these enzymes have such nucleophilic functionality, there are many ways of inhibiting them that take advantage of this nucleophilicity and do it using superb electrophiles. Okay. So it's electrophile meets nucleophile, and it's like the yin and the yang of chemistry. Magic happens and we see inhibition. So let's take another look at this. Here is a related reaction. This is a serine esterase that hydrolyzes acetylcholine. Okay. So here's acetylcholine over here. Acetylcholine is found in the synapses between your nerves, the junctures between nerves. It's one of the ways the nerves talk to each other, and it's one of the ways that you can actually, you know, move around. Okay. So the fact that I can move around is due to acetylcholine. And it turns out that the enzyme that breaks down acetylcholine hydrolyzes this ester bond right here. Okay. So the little enzyme comes in, snips this off, and shuts down the acetylcholine signaling.
Oh, and I should say that the mechanism for this is very similar to what we saw when we talked about serine proteases. So everything I've shown you on the previous slides still applies here. But check this out. There's a series of incredibly toxic, really, electrophiles that get into this enzyme active site and permanently cap it and shut it off. And the problem there is that now you have no way of hydrolyzing the ester of acetylcholine, and the effect is paralysis. Okay. So here's what I'm talking about. I'm talking about a series of nerve agents. These are things like sarin, tabun, VX. These are insanely toxic electrophiles that get into the active site. And here's the alkoxide. Here's the electrophile. And in the end, we have a covalent bond to that serine nucleophile. The effect is death. Okay. So that's why these nerve agents cause death. You're paralyzed. You can't get your muscles to move. You can't get your heart beating. And it's game over for you. That's why these compounds are so dangerous and scary. All right. So that's our quick overview of serine proteases, and for that matter, catalysis by the hydrolase class of enzymes. I want to switch gears. I want to talk to you next about enzymes that use cofactors. These are sometimes called vitamins. This is the way that proteins expand their functionality beyond the 20 naturally occurring amino acids. If the enzymes had to rely on the 20 naturally occurring amino acids alone, you know, life would be boring. The truth is these other functionalities that act as cofactors bind to the enzymes and participate in catalysis. These cofactors, these vitamins, dramatically extend the abilities of enzymes to catalyze reactions that they otherwise wouldn't be capable of catalyzing. And many of these are familiar to you, especially if you take a multivitamin tablet every day. These are the vitamin Bs; these are a bunch of vitamin Bs that I'm showing on this slide over here. And why don't we zoom in and take a quick look at an example of this? Here's one example. This is vitamin B3. This is nature's sodium borohydride. This thing works great. Why doesn't nature use sodium borohydride? Well, you know, Sigma-Aldrich didn't exist back when nature was working this stuff out. But equally importantly, borohydride, aluminum hydride, those things aren't exactly stable in water, unless it's cyanoborohydride. Okay, so for the most part, hydride's not so stable in water. So this evolved as a hydride source that is still stable in good old H2O. All right, so check this out. What this enzyme does is break down alcohol, ethanol, down to acetaldehyde. The enzyme is called alcohol dehydrogenase, and it relies on this vitamin B derivative called NADH. Or, this is actually NAD plus. Okay, and this breakdown of alcohol is absolutely critical. By the way, this is the same alcohol that you find in Mickey's Big Mouth, malt liquor. Okay, that's the stuff. This is the stuff that your fraternity friends are drinking while you're out studying on Saturday nights. This stuff, you know, is toxic to humans. It causes all kinds of problems. We've talked about that before. So this is the reaction that detoxifies alcohol, the first step in the campaign to detoxify alcohol and make the stuff go away so the headache disappears. But in order to do this, the enzyme pulls off a remarkable transformation that I want to share with you.
Okay, and all these experiments were worked out by the great Frank Westheimer about 50 years ago, and back in the day when he was working this stuff out, it wasn't exactly clear whether enzymatic catalysis should be stereospecific. There's no reason, really, for a reaction like this one, a trivial reaction really, to evolve to be stereospecific. Okay, but it turns out it is stereospecific. And let me show you what I mean. So if you feed the enzyme alcohol that has these two deuteriums, you know, geminal deuteriums next to each other, you'll end up with a deuterium atom placed stereospecifically in the NAD. Okay, so now this is NADH, but it has one deuterium, and notice the deuterium is coming out towards us. Okay, so just imagine that, deuterium coming out towards us right now. If you take this stereospecifically synthesized NADH and feed it to the enzyme lactate dehydrogenase, lactate dehydrogenase uses it as a starting material to do this reaction making lactate, this molecule over here. And so what's happening is the deuterium gets stereospecifically inserted when it reduces this ketone functionality. So what ends up happening is you get perfect stereospecificity for this second reaction over here. And this is beautiful stuff. Okay, this makes my heart go pitter-patter. Now here's why. What we're seeing in this very elegant experiment is that enzymes are pulling off hydrides in one way and then delivering them in another way. And they're doing this with perfect stereochemical fidelity every time. Okay, pitter-patter. Okay, let's take a closer look to try to understand that. When we look very closely at the atomic details of what's going on, actually everything I just told you totally makes sense. More pitter-patter. The origins here: this is the active site of alcohol dehydrogenase. That's the enzyme I showed on the previous slide. There is a zinc ion in the active site. And this zinc, Mr. Zinc, acts as usual as a Lewis acid. And here it is, neatly, you know, tied together with two cysteines and a histidine up here. And so this zinc grabs onto the alcohol. It's going to form a nice Lewis acid arrangement with the oxygen of the alcohol over here. And the vitamin B3 derivative, NAD plus, is present. But it's stationed below the zinc and below where the ethanol molecule is going to fit in. Okay, so this is a plane down here, up here we have the zinc, and then in between the two we have the ethanol. And so when the hydride hops off the ethanol, it has no choice. It only has one route available to it. It's going to go hopping down here and down here. And it's going to attack, specifically, every time, the top face of this NAD plus. Check it out, it's going down here. It's like, you know, being on a water slide at Raging Waters or something. It doesn't have any choice to go in some other direction. It's stuck in the chute. And so because it's directed, because it's stationed in a particular arrangement, the geometry is such that it only gives you a hydride that's introduced over here. The reverse reaction is equally cool. The reverse reaction is also stereospecific. It is going to pick off a hydride from the top face every time. Only the top face is available. Bottom face, not available. Top face, available. And that's really key to understanding the observations that Frank Westheimer made. And the beauty of this, to summarize, is that it gives us an atomic-detail, precise mechanism for understanding why it is that enzymes catalyze reactions with stereochemical fidelity.
And again, this enzyme doesn't have to be stereochemically, you know, perfect every time. But it evolved to be. And that's more or less what we're going to see time and again. Okay, so again, here we see vitamin B3 extending the functionality of native enzymes. That's pretty cool. Here's another example. This is a friend of mine called vitamin B6, tasty little bugger. This guy does all kinds of reactions. It makes cameos in all kinds of different enzymes here and there. And every time, it's providing crucial functionality that equips the enzyme with abilities that it otherwise would not be able to acquire. Let me show you some examples of this. This is a decarboxylation reaction. In this case, in the enzyme active site, this pyridoxal phosphate forms a Schiff base with an amino acid. And that sets you up for this decarboxylation reaction, where the amino acid is decarboxylated. And this happens, for example, with glutamic acid to form GABA, the neurotransmitter. This is the sort of thing that takes place in a lot of different cases. Here's another one. This is hydroxymethyltransferase. This is an aminotransferase. Notice in every case the arrow is ending up on this carbon over here. Okay? So we're going to build up some negative charge on this carbon that's adjacent to the Schiff base. And you're probably wondering, where are those electrons going to go? Really, really. I mean, are they going to really hang out there? What's so special about something that's alpha to a Schiff base? Well, these Schiff bases are analogous to carbonyls. So check this out. The electrons up here, that's the negative charge that results from each one of these arrows. The electrons up here can hop, hop, hop, hop, hop, all the way down to the positive charge down here. Check this out. This is a resonance structure. Again, hop, hop, hop, hop. And in the end, they bounce their way all the way down to the positively charged nitrogen, which we know doesn't like having positive charge. It's electronegative. And the net effect is that this stabilizes the negative charge up there. It makes this reaction possible. Otherwise, this reaction is not going to go. There's no go there. And so this is really powerful chemistry. And it should be no surprise to us that this makes multiple, multiple appearances. Let's zoom in. I want to show you the decarboxylation in a little greater detail. Here we have a decarboxylation of a hydroxylated tyrosine called DOPA to give us dopamine. Made famous, of course, by the fantastic movie Awakenings, starring Robert De Niro. If you haven't seen that movie, you owe it to yourself to rent it, especially if you want to go into medicine. Okay, so check this out. Again, we have PLP grabbing onto the amino acid using the Schiff-base handle that we talked about earlier. And now, what's happening is we do the decarboxylation. We get the negative charge, and the electrons bounce, bounce, bounce all the way down here to be stabilized. There's a whole resonance structure I'm not showing, but it's happening. And then this can get hydrolyzed over here to give us back the free amine. Okay? So Schiff bases, recall, are reversible. The bond forms, the bond breaks, the bond forms, the bond breaks. That sets you up to do a reaction and then release the product over here. Okay, one more in the PLP world. I could go on all day. I love PLP, but I'm going to show you one more. This is a really cool one. This is an example of a transamination.
So these are amino acid aminotransferases. They take the amine off of one amino acid and then hand it off to another. They do that using PLP as a cofactor. And this makes pyridoxamine phosphate, PMP. That's the transient intermediate that grabs the amine and makes this possible. So here we are. We have an amino acid. It's bound up as a Schiff base. As usual, in the enzyme active site, there's an amine that acts as a base and deprotonates. And we see this base pull off the alpha proton over here. Then we see it acting as an acid. Okay? This is Jekyll and Hyde kind of stuff. Right? Base, acid. Base, acid. Jeez, it cannot make up its mind. But this kind of versatility in reactivity equips the enzyme with really powerful abilities. Okay? So here we go. Base over here, acid over here. That gives us a new Schiff base. Check this out. Now, when we hydrolyze this guy, we have lost the amine that used to be on the amino acid. This gives us a new ketone. Okay? So again, the Schiff base acts as a reversible functionality and does all kinds of cool chemistry. Okay? Now, you're probably wondering, what happens to this now weird PMP? This PMP shown here can then be used as an amine source with a ketone, a different ketone, a different keto acid, that then becomes an amino acid. Okay? So in this case, we see the amine getting stored transiently as PMP. And then it hands off the amine to another keto acid to form another amino acid. All right. At this point, I'd usually ask you if you have any questions. I imagine you're in your fuzzy slippers hanging out, drinking margaritas or something. I don't know what you're doing. I don't want to think about it. But if you have questions, you email them to me as usual, or you ask the TAs, or you come to my office hours, et cetera. Let's change gears. I want to talk to you about protein engineering. This is a relatively new field. And it has, unfortunately, a terrible name. The name isn't a very accurate description of what it involves. It does involve proteins, but it involves a very bizarre type of engineering. Most of the time when I think about engineering, I think about, you know, building buildings or, you know, engines for cars or something. And in those cases, we understand to an extraordinary degree, really, the properties of the materials that are being used to build the stuff, the buildings, the engines, whatever. Proteins, it turns out, you know, are made out of these floppy materials. And we still don't understand all of the aspects of their folding and all of the aspects of their molecular recognition. And so that makes it very hard for us to do atom-by-atom protein engineering. It turns out that's actually pretty non-trivial. However, despite those challenges, scientists have been doing this for several decades. And they've been doing it with the goal of improving protein function and also understanding how proteins work. Why don't we take a look at the second example first? Okay, so here's an example of the kind of mutagenesis that protein engineers do to dissect how proteins work. Okay? So in this case, you start with some amino acid side chain and you convert it into an alanine. How do you do that? You change around the encoding DNA, which results in altered RNA, which results in mutant protein, or protein variants, as I like to call them, because mutant really should refer to the DNA at the very top. Now, here's what's great about this. If you do this mutation, you basically remove this hydroxyl right here.
Notice that? Notice we used to have a hydroxyl? Now we have a methyl group. Okay? So the hydroxyl is gone. It basically gives you a way of removing all of those atoms past the beta carbon. And so now you can ask, what function, if any, did the hydroxyl group contribute to this big complicated protein over here? Okay? So you're taking 16 daltons of molecular weight out of something that might weigh, you know, 44 kilodaltons or something, and you're asking, what is that oxygen really doing for you? This is a technique that I like to think of as the equivalent of reverse engineering. You know about reverse engineering. This is when Mercedes buys up a BMW and then, you know, proceeds to take it apart. Maybe they remove some wire and then ask, how does the BMW perform under snow conditions or whatever? Okay? So reverse engineering is a powerful technique, and protein engineers have been using it for years. I've already shown you how human growth hormone dimerizes the human growth hormone receptor. And I also even showed you pictures that look like this one of the hotspot of human growth hormone. I did not tell you, however, how it is that we know what we know about how growth hormone works. So here's the way: these were experiments done by Jim Wells and co-workers at Genentech, and what they did was mutate all of the buried residues in growth hormone. It turns out there are 19 of these side chains of growth hormone, and they systematically went through and mutated each of those to alanine. Okay? So it's a mutation from, let's just say, phenylalanine to alanine, and then they ask, what is the contribution made by that phenyl group? When they do that, they find that only these red residues are actually contributing binding energy. Mutating these other residues to alanine had no effect on the binding of growth hormone for its receptor. And furthermore, when they zoomed in, they saw this beguiling hotspot of binding energy. These red residues look like this, and notice that they have all this hydrophobic stuff in the middle, the green, and then this is ringed by hydrophilic functionality over here. It's kind of like a core sample of a protein. All right, so this teaches us stuff about how proteins work. I've been using it in this class to tell you about how proteins work, but it turns out it also has a practical purpose, and this is one of the fun things about protein engineering, trying to engineer proteins to do stuff that they otherwise wouldn't do. And I mean this is the kind of stuff that you find in your house. It turns out that proteases have been engineered using the techniques of protein engineering to develop better proteases. And I'm going to give you one example of this. So, subtilisin is a protease that had some modest specificity, but was pretty broad spectrum. And the goal was to engineer a new variant of subtilisin that can go out and cleave apart any protein. And the reason why you'd want to do this is you'd want to have a protease that can chop apart proteins that form stains on people's clothes. Okay, so you get a drop of blood on your shirt or whatever. You definitely want to get that removed, right? So, proteases found in these products go in and, literally Edward Scissorhands style, start clipping apart the proteins that would otherwise stain the clothes. So, the key was engineering the pockets that bind to the side chains of the substrate proteins and basically opening them up, giving them more space.
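Coming back for a second to the alanine-scanning bookkeeping behind experiments like the Genentech one: the procedure itself is mechanical enough to sketch in a few lines. This is a toy illustration only; the peptide sequence is made up, and the rule of skipping positions that are already alanine or glycine is a common convention, not a claim about the original study.

```python
# Enumerate every single-position alanine variant of a short peptide.
WILD_TYPE = "FQAYSKGD"   # hypothetical stretch of protein, one-letter amino acid codes

def alanine_scan(seq: str):
    for i, residue in enumerate(seq):
        if residue in ("A", "G"):
            continue   # removing a side chain that is absent (or nearly so) tells you nothing
        variant = seq[:i] + "A" + seq[i + 1:]
        yield f"{residue}{i + 1}A", variant   # mutation label like 'F1A', plus the variant

for label, variant in alanine_scan(WILD_TYPE):
    print(label, variant)
```

Each variant then gets expressed and its binding measured; the residues whose alanine variants lose binding energy are the ones that light up red as the hotspot.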
That extra space in those engineered pockets means that this subtilisin variant can then accommodate a diverse array of different proteins. Okay, so, you know, maybe one day it's chewing apart some pea soup that lands on your jacket. The next day it's, you know, chewing apart some other protein that it happens to find in a stain. So, this is powerful, and it's used in a wide variety of different products. This is the kind of stuff you don't hear about, but it's actually superbly useful. All right, the problem is I've told you the good parts. I've given you the greatest hits. Turns out, for every greatest hit, there's probably a dozen total failure wannabes that are lurking in the shadows. And the reason for this is that most mutations take a perfectly good protein and turn it into trash. Okay, so most mutations make proteins less functional. And here's a really cool example of this. In this example, this is staphylococcal nuclease, an enzyme that digests DNA. And in blue, these are amino acids that cannot tolerate mutations. Every single one of these blue residues totally resists any substitutions. In yellow, those are the few that allow some changes. They can tolerate mutations. And you'll notice there aren't that many yellow residues here. The vast majority are blue. So, random mutagenesis does not work so well. It takes a lot of time, which is why, you know, natural evolution doesn't happen so quickly either. So, scientists have come up with all kinds of more powerful ways of introducing mutations. We've talked about them in class. We've talked about, for example, oligonucleotide-directed mutagenesis using QuikChange PCR. We've talked, I believe, about Kunkel-based mutagenesis. So, there are ways of focusing the mutations into particular regions of protein space and then using evolution as a powerful tool, say, using phage display, for example, to evolve new functions of proteins. All right. I'm going to skip this. I want to talk to you next about carbohydrates. That's all I have to say about proteins. I could talk about them for an entire class. It's one of my all-time favorite topics. But, you know, I have other things I need to talk to you about. So, we're going to be switching gears. We're now on Chapter 7. We're going to be talking about carbohydrates. If your sophomore organic chemistry class did not cover carbohydrates, I need you to go back and review the chapter on carbohydrates in that textbook that you kept from Chem 51 or whatever sophomore organic chemistry class you took. Don't get too wrapped up in all the reactions. I'm interested in reactivity. And let me show you what I mean by that. So, carbohydrates are hydrates of carbon. We've already seen, for example, ribose. You've seen glucose before. But they all have this general formula of carbon with the same number of waters. Okay? So, over here, five carbons, and then five water molecules. Despite that rather beguiling simplicity, the truth is these things are darn complicated. Many chemical biologists, not all, but many, find these annoyingly baffling. It's really one of the frontiers in chemical biology to better understand carbohydrates, their properties, their reactivity, their function in the cell, et cetera. And this is really a challenge for us. They often have complex structures that are difficult to assign, for example. All right.
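That hydrate-of-carbon bookkeeping is simple enough to check mechanically. A trivial sketch with made-up test cases:

```python
def is_carbohydrate_formula(c: int, h: int, o: int) -> bool:
    """True if CcHhOo matches the hydrate-of-carbon pattern Cn(H2O)n."""
    return h == 2 * c and o == c

print(is_carbohydrate_formula(5, 10, 5))   # ribose, C5H10O5  -> True
print(is_carbohydrate_formula(6, 12, 6))   # glucose, C6H12O6 -> True
print(is_carbohydrate_formula(2, 6, 1))    # ethanol, C2H6O   -> False
```

Of course, fitting the formula says nothing about which ring forms and stereocenters you have; as the lecture goes on to show, that is where all the complexity lives.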
Before we go any further, I need to introduce you to some important nomenclature that we're going to use and that you must memorize. We're going to be referring to five-membered carbohydrate rings as furanoses and six-membered carbohydrate rings as pyranoses. If there are five carbons, we'll be referring to the carbohydrate as a pentose. Notice there are five carbons here. So you can have both a five-carbon furanose ring that's a pentose and also a pyranose ring that is also a pentose. Okay. I hope now you're totally confused. Here I'm going to rescue you. You can have a six-carbon hexose that has a five-membered ring, called a furanose, or a six-membered ring, called a pyranose. Okay. Makes sense? I hope so. That's the nomenclature we're going to be following. All right. This is one of those extraordinary slides that, when I first noticed it, I was like, I can't believe this is true, but it is. Okay. It turns out that in the human body, there are only nine carbohydrate building blocks. That's it. That's the sum total. So it turns out that even some of the ones that you're familiar with aren't really found in the oligosaccharides on the surfaces of cells. For example, ribose. Ribose is not incorporated. Ribose is not listed here. Okay. So although there are only nine, you don't have to go out and memorize them. Okay. So don't bother learning all of the carbohydrate building blocks unless you're planning to go into glycobiology or glycochemistry, which incidentally I recommend. It's a really exciting frontier. There's cool stuff going on. Carbohydrate nomenclature: unfortunately, we're kind of stuck with the old-timey conventions. There's just no way around it. And it turns out actually those conventions make a lot of sense. They make our lives easier. And so if you have a convention that's kind of annoying, but it makes sense and it's easy to use, you're stuck. Okay. So for example, we're going to be referring to this structure as D-glucose. It has an R functionality at this carbon over here. I'll have more to say about that in a moment. Check out how much better it is to call it D-glucose than to call it this crazy name, which would be the IUPAC name. What is it with the Ds and the Ls? All right. So, straight up, most carbohydrates found in nature are D, but the truth is we have some L's floating around as well. And so we have to know what this D and L business is. The D and L nomenclature refers to the carbon that's furthest away from the anomeric carbon. The anomeric carbon, highlighted with the dot here, is the carbon that has two oxygens attached to it. Two oxygens attached to one carbon: that's the anomeric carbon. And then we go as far away from the anomeric carbon as we can and look at the stereocenter. If it's an R stereocenter, it gets the designation D. If it is an S stereocenter, it gets the designation L. Okay. And, you know, here's an example where it's not even next to the oxygen over here. So this one, furthest away, R, therefore it's D. This one, check this out. Okay. It's not even part of the ring. The ring is over here. It's way off on its own little crazy side chain. Doesn't matter. Okay. We still look at the stereocenter that's furthest away from the anomeric carbon; in this case it's R, so therefore it gets the D designation. Make sense? Good. That's what we're going to be using. Now, here's the other thing. The anomeric carbon of a carbohydrate is subject to some change. Okay.
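Since the D/L rule just stated is completely mechanical, it can be written as a one-line lookup. A minimal sketch; it assumes you have already assigned R or S to the stereocenter farthest from the anomeric carbon, which is of course the actual work.

```python
def dl_designation(reference_center: str) -> str:
    """Map the R/S label of the stereocenter farthest from the anomeric carbon to D/L."""
    return {"R": "D", "S": "L"}[reference_center.upper()]

print(dl_designation("R"))   # 'D', as in D-glucose
print(dl_designation("S"))   # 'L'
```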
So this anomeric carbon, as we'll see in a moment, could either have an alpha configuration or a beta configuration. The alpha and beta designation refers to its relationship with this D/L-determining carbon. Okay. Now, don't get confused. Don't panic. It's very straightforward. Alpha equals anti. Okay. So if this one is up and this one is down, up, down, Egyptian style, then we're going to designate this as alpha. Okay. The D, again, comes from the carbon furthest away from the anomeric carbon. Okay. So anti is alpha. Same side is beta. These two are coming out towards us. They're both sticking up, and therefore it's going to be called beta. Okay. Very straightforward designation. It does take a little practice. So try it out. You know, amaze your friends at cocktail parties. Whatever it is you want to do with this information, it will be useful, because it's how we talk to each other. And you and I have to be able to talk to each other using a common language, or else we won't know what we're talking about. Okay? I won't be able to listen to you, you won't be able to listen to me. So anyway, this is the nomenclature we're going to use. It is essential that you learn it. All right. There is this notion of an anomeric effect. The truth is it's very modest, so I'm going to skip it. To really understand carbohydrate reactivity, we first have to talk about the reactivity of a hemiacetal. And so it turns out that carbohydrates are oftentimes found interconverting between a hemiacetal configuration and an open-chain configuration. And I can offer you two different mechanisms for this interconversion. In one mechanism, we start with acidic conditions and protonate the oxygen of the ring. Okay. So you protonate here, and then electrons bounce, bounce, giving us neatly this aldehyde open-chain configuration of the ring. Conversely, we do the same thing under basic conditions. But this time, we first deprotonate the hydroxyl, kicking electrons, bouncing, bouncing, opening up the ring. Okay. And there's a second arrow in both cases. That second arrow just refers to a simple proton exchange. Not even worth our time talking about it. What this tells us, though, is that no matter what, all carbohydrates are susceptible to forming their reactive aldehyde form. Okay. Notice the hemiacetal has the aldehyde all bundled up protectively. Right? The aldehyde is hidden away. But under either acidic or basic conditions, the aldehyde gets exposed. And aldehydes are super-duper reactive. They are electrophiles. This is bad news. This is why Coke, you know, well, I can say a lot about Coke. But this is why sugars in general are not so good to have floating around our bloodstream, where aldehydes like this one can find reactive nucleophiles and start going to town and forming all kinds of uncontrolled products. All right. I'm getting off topic. Let's get back to the topic. I want to talk to you about stability in the ring. I've shown you that the ring can come apart under both acidic and basic conditions. There is, however, a general rule of thumb that tells us whether or not the ring is going to come apart. In general, the least-strained ring wins. Okay. So if it's a choice of forming a pyranose ring, in the case of glucose, or a furanose ring, this one is going to win. Okay. So six-membered is less strained than five-membered. This can form, neatly, a nice chair conformation.
This one can form an envelope, but still it's not quite as good. The seven-membered ring can also form. Truth is, we never see this. Okay. That sugar stuff that you ate with your, you know, sugar pops this morning for cereal or whatever, none of it was in this seven-membered ring. We never see this. This thing is super-duper strained, and it's also entropically disfavored. Right? This means that this carbon over here that's flapping around in the breeze has to somehow get up close to the aldehyde carbon. And it's just too far away. So entropically disfavored, thermodynamically disfavored, all that adds up to bad news. Let's zoom in and start taking a look at examples of carbohydrates found in biology. And no one is better at this than the surface coatings of the TB bacterium, Mycobacterium tuberculosis. Okay? So this is now a little schematic view of the outer surface of this bacterium. And check this out. This guy has decorated itself like, you know, Christmas in some country that really likes Christmas and lots of lights. Because this one has totally gone to town. It has Christmas trees and lots of lights. The bacterium does this to escape the immune system, for one thing. This stuff holds off the immune system at a distance. Okay? But notice each one of these little polygons is a different carbohydrate, a different monosaccharide. And notice that they're linked together into little chains, and then these chains kind of branch off. And the linkages of these monosaccharides are through glycosidic bonds that I'll show you on the next slide. But what we find is abundant and highly diverse architecture. This just doesn't look like, you know, a smooth outer surface. This is an outer surface that's very rough, that's incredibly diverse. There are all kinds of different chains found here. And we're just going to have to do approximations to describe these things. This is going to make our lives miserably complicated. And it will make your life miserably complicated if you want to study tuberculosis. Because this gives the TB bug a really potent weapon for avoiding being tackled by the immune system. Okay, so again, oligosaccharides, extremely complicated, extremely complex. Here's another example. This is one example from the cell surface. Check this out, okay? So this guy has this long chain over here, all kinds of branch points, each branching point going off in different directions. But at the end, over here, there's a lipid. The lipid sticks the thing down into the plasma membrane. This is a spike that drives it straight into the plasma membrane and anchors it firmly. So all this branched stuff is like shrubbery. It's kind of waving around out there in space. And it's anchored firmly down here. Its feet are stuck firmly into the ground, because it has this lipid tail that likes to be down in the plasma membrane. It has no choice but to be down there. All right, so this is the schematic diagram. But the truth is we're chemists. We're organic chemists. We don't like thinking about things in this polygon representation. Instead, Lord help us, we like to do things much more complicated. We like to look at them at the level of atoms and bonds. And so when we look closely, we see this crazy complexity, where we have all kinds of alpha glycosidic bonds, beta glycosidic bonds, and a very, very complicated situation. So what's a chemical biologist to do? Okay, I'm showing you the worst case scenario. Things are super complicated.
I want to step back for a moment. I'll try to simplify things so that when you see a complicated diagram like this, you don't get all daunted and scared. Instead, I just want to start off easy. We're going to start off slow. And then later, when you encounter these complicated things, they won't be as intimidating. So let's get started by talking about formation and breakage of glycosidic bonds. It's clear these glycosidic bonds are important, right? You know, this whole thing is stitched together by glycosidic bonds. Here's one, here's one, here's one. Every single carbohydrate here has a glycosidic bond. So that should be our first priority. Glycosidic bonds are an ether linkage between one saccharide and another, one glycan and another. When we look at the mechanism for their formation or hydrolysis, they all take advantage of the fact that this anomeric carbon is adjacent to another oxygen. This sets you up for forming either an oxocarbenium ion or an oxonium ion. I think it's safe to say all textbooks show you this oxonium-ion type of configuration. And it's not exactly wrong, but it's not exactly right either. Instead, the truth is somewhere between these two extremes. In one case, we have a carbocation. In the other case, we have something that's even more disgusting than a carbocation, which is an oxygen bearing a positive charge, where oxygen, being electronegative, doesn't like having that positive charge. In any case, this intermediate sets us up for either hydrolysis or for a new alcohol to attack, giving us formation of either a hydrolyzed glycosidic bond or a new glycosidic bond. Okay, so in every case, we're going to kick off either a hydroxide down here or an alcohol, and that's going to set us up with some positively charged intermediate that can then be attacked. Okay, and it should make sense to us that this is going to be attacked. We've seen the mechanism for this hydrolysis before, and I'd like to remind you of it. We saw it when we talked about lysozyme, a glycosidase enzyme. In that case, what I emphasized to you was that the nucleophile that was attacking was going through kind of an SN2 reaction, right? It turns out it's somewhere between SN2 and SN1. In other words, the alcohol functionality that's getting hydrolyzed steps out the door a little ahead of the nucleophile coming in to attack, okay? So what I'm showing you is how we chemists like to represent intermediate cases between SN1 and SN2, where we show a super long bond here, and it kind of implies that we're going to have a little more positive charge down here, which incidentally makes it all the more attractive for a negatively charged nucleophile to come driving in, okay? And notice too that I'm showing you the substrate distortion that was the hallmark of lysozyme's functionality. Again, you start with this wonderful little cozy chair, and the chair gets torqued physically, and doing that sets you up for this neat backside displacement of the SN2 reaction, okay? So again, lysozyme, the Pac-Man of chemical biology, twists this chair and forces it into this boat conformation or twisted chair conformation, setting up this nucleophilic attack and making this reaction possible. I showed you one example of lysozyme. It turns out there are many others. There are many glycosyl hydrolases, and it turns out we classify them as either inverting or retaining.
An inverting enzyme, let's just start over here, converts an alpha anomer into a beta anomer, and a retaining enzyme keeps whatever stereochemistry was there. So if it's beta stereochemistry to start, you finish with beta stereochemistry, okay? So there are two possibilities here. They have two distinct mechanisms. Beyond saying that, I'm not going to get too wrapped up in this. We've talked about this before. All right, let's move on. So we've talked about the fundamentals of forming one of these glycosidic bonds, and we've talked about the fundamentals of breaking one. I now want to talk to you about why it is that this matters in terms of disease. I've already shown you tuberculosis. Unfortunately, I don't have a great way that tuberculosis can be cured using hydrolysis of glycosidic bonds. That's a frontier. Maybe someone in this class will be able to solve that, which would be really cool. Instead, I want to talk to you about the common cold, okay? Which I realize, and you realize, is one of those unsolved challenges, right? You know, it's practically a swear word to say, why don't you get a cure for the common cold? Why don't you do something useful with your life? As though it would be so easy. So here's the closest that we've come. There's an enzyme called neuraminidase that is a key enzyme in the life cycle of influenza, the virus that causes flu. This enzyme helps to release the virus from the cell surface of flu-infected cells, okay? So here's the host cell. Recall that viruses parasitically take over the machinery of the cell to produce new zombie copies of themselves. And then the virus buds on the surface of the cell. And after it's fully formed, it needs some way of getting off, of being released. And so this enzyme neuraminidase cleaves the carbohydrate that has it firmly held in place. So inhibition of neuraminidase has been a key target for influenza inhibitors and therapeutics ever since I knew what the term chemical biology means. Which is a really long time. And unfortunately, we don't have any great solutions to the problem. But let me show you the best that we have. Okay, so the best that we have are things that kind of look like the carbohydrate that's getting cleaved by neuraminidase. These are substrate mimics. We've seen substrate mimics before, right? The ATP mimic we saw was like ATP, except it wasn't, okay? So here it is, a compound called zanamivir that actually looks a lot like sialic acid, especially if you squint like this. When I squint at it, it really does kind of look like sialic acid. And that's good, because by looking like sialic acid, it can fit neatly into the enzyme active site that evolved to bind to sialic acid. And then here's another one, called oseltamivir, that is actually given to patients as a prodrug. Okay, so earlier we talked about proenzymes. I don't mean earlier today, I mean back in last Tuesday's lecture. We talked about proenzymes, this concept of an enzyme that's then cleaved apart to expose its active fragment. Here we're seeing a prodrug. The cell has to hydrolyze this ethyl ester functionality and free up the carboxylic acid in order to have a functional drug. In the absence of the esterase, the drug does not work. Okay, but fortunately esterases are a dime a dozen. And this strategy is a very effective one for hiding away negatively charged functionalities that need to be present to make the drug function.
Negatively charged functionalities, however, affect things like the ability of the drug to pass through the hydrophobic plasma membrane. And so this is a way of making that carboxylate a little more greasy and a little more readily able to pass through hydrophobic passages. Okay, here we see a zoomed-in view of the active site of neuraminidase. And I'm not going to go through the mechanism, because it has a mechanism similar to other glycosyl hydrolases that we've talked about. But check this out: this compound over here, the carboxylic acid of oseltamivir, bound in green to the active site. And look at how beautifully positioned it is. Okay, I mean I just want to take a moment to gaze in awe at this beauty. Sorry, I can't help myself. Check this out, there's this positively charged arginine precisely poised above the negatively charged carboxylate, after the ethyl ester is hydrolyzed to expose it. And you can see that's absolutely crucial, right? We have one, two positively charged functionalities, two guanidinium functionalities from arginine, that are perfectly poised to grab onto that negative charge. So if you don't have negative charge here, the thing is not going to bind. Beautiful stuff. Okay, so chemists have been working on this for a while. Unfortunately, our best shots are drugs that you take a day or two after you get infected, and they shorten the length of time that you have influenza. The real problem is we're not so good at recognizing when you have influenza. If we had a way of knowing, yeah, you have a couple of influenza viruses that are going to expand and then give you a full-blown, you know, snotty flu in a day or so, we would be really effective at treating it, but we don't. And to me that suggests a need for better diagnostics. All right, let's change gears. I want to talk to you more about oligosaccharides. We have to talk more about nomenclature. These things are getting complicated really fast. What we're going to be doing is referring to the attachments, the carbons that are attached to each other, in parentheses over here, and then we'll have a three-letter abbreviation to designate the monosaccharide, the glycan, that's being attached. So for example, this is sialic acid over here that has an alpha configuration. Notice it is anti, alpha being anti. And then it's a linkage between carbon 2 and carbon 3, carbon 2, carbon 3, to a galactose functionality linked to a glucose functionality, or actually, this one, an N-acetylglucosamine functionality. So things are going to get complicated quickly. Don't panic. Don't get all worked up about this. Especially don't spend any time memorizing all this stuff. Rather, I want you just to be comfortable with the concept, and familiar enough with the concept that it doesn't throw you a curveball. All right. I want to talk to you about how it is that these long chains of carbohydrates, of oligosaccharides, get formed. Typically, the glycosyltransferase class of enzymes uses a diphosphate base as a glycosyl donor. Okay, and by base I really mean like a DNA base, or actually an RNA kind of base. In fact, they use a UDP, or sorry, in this case a GDP variant of the starting material as a way of activating the starting material. Recall that phosphate is nature's tosylate or mesylate; it's nature's leaving group. And so this enzyme, a fucosyltransferase, starts with a fucose. The fucose, though, is attached covalently to this diphosphate.
The diphosphate is going to be a good leaving group, and that sets you up for forming neatly this glycosidic bond over here. Okay, so what's going on? A hydroxyl is attacking this anomeric carbon, and then the diphosphate is stepping out the door. This again is that kind of hybrid that we saw earlier, a hybrid SN2, SN1 reaction, where the GDP functionality is starting to step out the door a little more quickly than the hydroxyl is coming in as an SN2 nucleophile to attack. But it's kind of a hybrid of the two, a similar transition state to what we've seen earlier. Let's get into the mechanism a little bit more. Here's a picture of the enzyme that actually does this reaction, and then here's what it looks like in the active site. Pure beauty, isn't it? In this case, we have the hydroxyl neatly poised above this anomeric carbon. Notice that this guy is set up neatly for backside displacement. You know, all of the orbitals are neatly in line. This sigma star orbital is precisely positioned to have the lone pairs in here wing their way in nucleophilically and attack this anomeric carbon. I love this kind of stuff. It is just pure beauty in action. Okay, now what is this good for? What this is good for is it sets us up for building really complex structures out of carbohydrates, and some of these complex structures are kind of familiar to us. This top one is cellulose. Cellulose is nothing more than glucose. You know, the sweet stuff that tastes so good? Yeah, it's glucose, except it's joined by beta glycosidic bonds. Okay, so the cellulose of this table, the cellulose in wood, is actually made of that tasty glucose. The problem, of course, and the reason why cellulose doesn't taste so good to us, is that we have no way of hydrolyzing these beta linkages of glucose. Instead, we're very adept at hydrolyzing the alpha linkages of starch, okay, which is shown here. Starch forms these helices that kind of wind around each other as a consequence of having this alpha linkage, okay, so it's an alpha glycosidic bond. The differences between alpha and beta could not be bigger, right? On one hand, starchy things taste good. That's the potato chips. Cellulose things, not so good, right? You start chewing on a 2 by 4, let me know how good that tastes for you. Okay, so these are ubiquitous forms of carbohydrates found in nature. In fact, the majority of the biomass found on our planet is stored in cellulose or in starchy forms, and so this is totally ubiquitous. Another very ubiquitous polysaccharide, and I'm calling these polysaccharides because they're just really long chains of glycans, is chitin. So chitin is the outer exoskeleton of insects, of shrimp, of crustaceans, right? And I don't know about you, but when I eat shrimp, I'm one of those people who usually peels off the shells, or I spit them out, but I don't like chewing on them. I don't like eating them. I do have friends that, for whatever reason, eat the shrimp whole, with the shells and everything. The truth is they don't get any nutritional benefit out of those shells, okay? Because chitin is indigestible to us. It's actually a nitrogen analog of cellulose. It has the same beta linkages, but it has this N-acetyl functionality that replaces a hydroxyl of glucose. And so although this is the exoskeleton of arthropods, it's not digestible to humans. We do not have functional chitinases in our stomachs.
This is really too bad because actually this would be a great source of energy. And it's very likely that humans, our human ancestors, the not, you know, Homo sapiens humans, but the way distant ancestors to us, actually probably were capable of digesting these sorts of shell exoskeleton things. And we can see evidence of this. When we look in the human genome, we can see nonfunctional chitinases that are still carried along, which again suggests a diet that our ancestors ate that was very rich in bugs. So it's very likely we were eating all kinds of buggy things, and maybe we had a functional chitinase that would allow us to digest the chitin and get energy out of it. All right, switching gears now. I want to talk next about oligosaccharides. Oligosaccharides we're going to define as having much more determinate length and much more determinate structure, where polysaccharides are a little less determinate. Here's the first example I'm going to show you. This is the oligosaccharide that's found in your knee joint, okay? This is the oligosaccharide that lubricates these joints and makes it possible to have bone-on-bone stuff without grinding apart the bones after 30 years, okay? So hyaluronan is synthesized as a continuous extrusion on the surface of the cells that are found near this joint. It's synthesized by chondrocytes, and it forms this weird gel-like cushion, okay? So in this case, what's happening is we're starting with a UDP precursor, and much like what I showed with the GDP precursor on the previous slide, the UDP is just a superb leaving group. And so UDP steps out the door and a new glycosidic bond forms, and this basically gets extruded by the chondrocytes straight off into this joint region, and this gives us a nice hyaluronan gel that cushions the joints. What is it about this that cushions joints? Okay, well, I don't think it will come as any surprise to you to find out that this stuff is very water soluble, abundant opportunities for hydrogen bonding, hydrogen bonding here, hydrogen bonding down here. The carboxylate functionality is very nice as well, lots of hydrogen bonding. The carboxylate also pushes the strands apart from each other. This makes a nice cushion, it makes a nice little watery layer that soaks up the water and is very stable. All right, last thought of the day, glycosylated proteins. So I've been showing you carbohydrates that are kind of free floating. What we find though, when we look carefully at cells, is we find a shrubbery of glycosylated proteins all over the place on the outer surface of cells. And when we come back next time, we'll be talking about all of the abilities that this endows cells with. So why don't we stop here. I look forward to talking to you when I get back from Rio.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:02:04 Enzyme Functions 0:06:10 Serine Based Proteases 0:10:44 Protein Based Inhibition of Proteases 0:13:10 Covalent or Mechanism-Based Protease Inhibitors 0:15:02 Inhibition of Serine Esterases 0:17:07 Enzymes Use Co-Factors (Vitamins) 0:21:31 The Origins of Stereospecificity in Alcohol Dehydrogenase 0:24:09 Pyridoxal Phosphate (Vitamin B6) 0:27:29 PLP-Catalyzed Transamination 0:29:29 Protein Engineering 0:36:16 Most Mutations Make the Protein Less Functional 0:38:17 Carbohydrates 0:44:30 Hemiacetal Reactivity and Formation 0:46:33 Glucopyranose is the Most Notable Ring Configuration 0:47:51 Oligosaccharides of the TB Coat 0:51:29 Oxocarbenium Ions as a Key Intermediate in Hydrolysis of Glycosidic Bonds 0:53:19 Mechanisms of Enzymatic Hydrolysis 0:54:58 Commonalities in Glycosylhydrolase Mechanisms 0:56:03 Neuraminidase: Key Enzyme in Influenza Release from the Cell Surface 1:01:06 Oligosaccharides 1:04:22 Polysaccharides 1:08:04 Hyaluronan: Oligosaccharides in Joints 1:09:57 Glycosylated Proteins
10.5446/18872 (DOI)
We're going to pick up where we left off. Hopefully you watched the podcast lecture on Tuesday. I want to pick up what we discussed on Tuesday, looking at protein function. Specifically, we're going to be talking about how enzymes work. How do enzymes catalyze key reactions in the cell and so on. We're going to start by talking about some measures of enzyme activity. And then we'll talk about regulation of enzymes. And then we'll get into the mechanisms. And then we'll close with mutagenesis and engineering. So what we talked about on Tuesday is that proteins have a wide range of roles, structural, binding, and catalytic roles. In the example of structural, we saw, for example, collagen. We saw how collagen gets organized into these complex assemblies that make it possible to have extremely strong bones and things like that. We also talked a little bit about titin, a muscle protein that makes muscles capable of being pulled and stretched without breaking, without snapping. We also talked about binding. The example we saw there was FKBP binding to FK506 and rapamycin. And then finally, we're now up to the part where we can start talking about catalysis. So protein function has at least three major roles inside the cell: structure, binding, and catalysis. And when we talked about the repeat proteins, for example, we saw a great example of binding. And I want to just very briefly, we'll take a look at that in a moment, but I just want to emphasize something based on some questions that I got during my office hours. We talked about how noncovalent receptor ligand interactions can be described by dissociation constants and an on-rate and an off-rate. We're now at the point where we're going to start talking about the Michaelis-Menten constant, KM, which is an analog to KD. And something that I touched on at the very end of the lecture is that enzymes work by lowering the transition state energy of the reaction. Doing this is in essence how you catalyze a reaction. By lowering the activation energy necessary for that reaction to take place, you're increasing the speed at which the reaction takes place. And enzymes do this by binding to the transition state and stabilizing it. By having counterions that stabilize charged functionalities in the transition state, the enzyme can lower the transition state energy. And this is surprisingly effective. We're going to be seeing some examples of that today. So we're going to be talking today also about how this catalysis by enzymes is coupled to the motion of the enzyme. And I'll show you some examples of this. And what we're going to see is that enzymes force the substrate, the starting material, into conformations that favor formation of the product. And by doing that, that accelerates these reactions. Okay, and very briefly, I just want to touch on one point about the repeat proteins. Something I didn't mention but probably should have when I talked about the repeat proteins. This is an example of a repeat protein. This is an ankyrin repeat. Notice that it has a series of helix, turn, helix, turn, helix, turn, helix. So those repeats, each one of these is a loop and then a helix, a loop and then a helix. And notice that they're all stacked on top of each other. Each one of those is one example of a repeat. And this ankyrin repeat has a whole bunch of these lined up. Okay, that's where it gets the name repeat protein. Make sense?
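One quick numerical aside on that transition-state argument before we go on. The fold acceleration you get from stabilizing the transition state by some energy goes as exp(ddG/RT), where ddG is the drop in activation energy. Here's a minimal sketch in Python; the 23 kJ/mol figure is an assumed, illustrative stabilization energy, not a measured one.

import math

R = 8.314   # gas constant, J/(mol K)
T = 298.0   # roughly room temperature, K

def fold_acceleration(ddg_kj_per_mol: float) -> float:
    # Rate enhancement from lowering the activation barrier by ddg (kJ/mol).
    return math.exp(ddg_kj_per_mol * 1e3 / (R * T))

# An assumed ~23 kJ/mol of transition-state stabilization gives roughly a 10,000-fold speedup.
print(f"{fold_acceleration(23.0):.2e}")

So modest stabilization energies buy enormous rate enhancements; that's the whole game. Okay, one more comment on the repeat proteins.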
And then similarly with the leucine rich repeats, in this case, it's a helix, strand, loop, helix, strand, loop, helix, strand, loop, et cetera. Okay, but it's a series of repeating motifs, a repeating structural motif. In fact, these are actually individual domains. They fold modestly well on their own and they can be shuffled around to a limited extent. Okay, any questions about the topic of protein structure? And for that matter, any questions about anything that was covered on Tuesday, in Tuesday's lecture? All right, well I think in that case then we're ready to move on. We're going to talk next about catalysis. Today's discussion is going to be pretty high level. I'm going to be telling you stuff that's actually not in the book. And in fact, it's really only found in the frontiers of the literature in chemical biology. So don't hesitate to interrupt if at any point I start to lose you. Okay, it's better if you interrupt me early on than if I go further down the road and then you're totally lost. Okay, because the truth is this lecture will be the only time that you're going to be able to find this material I'll be discussing. And I think it's very, very important. It's actually the very frontiers of chemical biology. Okay, so I want to talk to you today about catalysis. And last time I showed you that non-covalent binding consists of ligands hopping onto some binding site in a receptor. And we describe this non-covalent interaction using an equilibrium constant, a special equilibrium constant called a dissociation constant that quantifies the ratio between unbound up here and bound receptor ligand interactions. Okay, so this is pretty straightforward stuff. The only difference with catalysis is that we're going to have this similar receptor ligand interaction. But upon binding to the ligand, with the enzyme in this case in place of the receptor, the ligand is going to be transformed, converted into some new product. So it's a similar sort of process. So if we understand non-covalent interactions, then we can also understand catalysis by enzymes. Okay, I'm skipping ahead, skipping, skipping. We talked about this already. All right. This is, I believe this is where we left off, right? Is this right? Okay, so where we left off again is this idea that receptor ligand interactions are governed by some binding. In the case of enzymes, the substrate binds and then some catalysis takes place to convert the substrate into a product. S stands for substrate or starting material, and that gives us P, which is the product, and then the product has to dissociate. I think I've talked about this, right? This was covered. Right? I did cover this. Okay, good. Right, okay, so I think I'm actually over here. Okay, good. So here's a typical reaction scheme I already talked about on the previous slide. The formation of this enzyme-substrate complex, this is sometimes called the Michaelis-Menten complex, this complex here of the activated enzyme bound to the substrate in a way to catalyze the reaction, has an equilibrium constant. And in the same way that the KD was the ratio of the rate constants for dissociation and association, koff over kon, in the same way the Michaelis-Menten constant is equal to the sum of the off rate and the kcat, divided by the on rate. That's like getting to the ES and either going backwards or forwards.
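Written out, consistent with the rate constants just named (this is just a restatement of the standard definition, nothing new):

$$K_M = \frac{k_{\mathrm{off}} + k_{\mathrm{cat}}}{k_{\mathrm{on}}}, \qquad \text{compare } K_D = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}}$$

The numerator collects the two ways the ES complex can disappear: backwards, releasing substrate, or forwards, through catalysis.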
That's the rates going in either direction for destruction of the complex, versus forming the complex, meaning the rate constant kon. This KM resembles the KD for non-covalent binding interactions, and so it's useful for us. The size of the KM tells us something about how avidly the enzyme is going to grab onto the substrate and how quickly it's going to form this ES complex. So this KM is actually a useful number. It tells us something about the conditions inside the cell, because the enzyme has to evolve up to a sufficient affinity for its substrate so it can grab onto the substrate inside the cell. If the affinity is too low, then the enzyme substrate complex will never form and the enzyme will never catalyze a reaction. On the other hand, if it evolves to the point where it's super-duper high, maybe that's not so useful for the cell, because maybe the enzyme then will be a little too active. So these evolve up to the maximum ability that's necessary for the cell, that's required for the conditions found inside the cell. And oftentimes, for example, we can engineer enzymes to have very different KMs simply by tinkering with their active sites. And I'll talk about that more in a moment when we talk about protein engineering. Okay, so if we look at a dose response diagram, this is similar to the dose response diagrams that I showed on a previous slide on Tuesday. On the y-axis, we have our initial reaction velocity, and on the x-axis, the concentration of the substrate. The point of inflection here is going to be roughly the KM, this equilibrium constant up here. For that matter, this point of inflection also tells us where 50% of the maximum rate of the enzyme is going to be. At the very highest concentrations of substrate, the enzyme is going to be running flat out. Okay, so that's like as fast as the enzyme can possibly go. It would be the equivalent of giving the sprinter maximum oxygen, maximum glucose, everything he or she needs to run as fast as possible. So that's up here under maximum velocity conditions. Notice that this asymptotically approaches this Vmax value way up here. Okay, and so anything where there's an excess concentration of substrate, that's called Vmax conditions, and typically enzyme reactions that are run in the laboratory are run under those conditions. We always have an extreme excess of substrate, typically. Okay, let's take a look at some KMs. They range widely. There's a wide range of possible KMs for enzymes. And here are some numbers over here. Now, like the KD, a lower KM value means tighter binding. Totally analogous to the dissociation constant. In fact, it has a very similar connotation. So what this tells us, for example, is that this enzyme here, cytochrome P450, binds benzopyrene with very high affinity. This should come as no surprise to us. Benzopyrene is this big, flat hydrophobic molecule. And hydrophobic molecules in general aren't so soluble. So the cytochrome P450 in your liver is going to be grabbing on to the benzopyrene that you inhaled on your way over here, when you got behind that stupid shuttle bus that was, you know, dinking along at too slow a speed. Right? So you get behind the exhaust pipe of that shuttle bus and you start inhaling unburnt benzopyrene. So this cytochrome P450 in your liver is right now, as we're speaking, grabbing on with great affinity to these benzopyrenes. On the other hand, there are some enzymes that don't have to grab on all that well to their substrates, like aconitase.
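To make that saturation curve concrete, here is a minimal sketch in Python of the Michaelis-Menten rate law; the Vmax and KM values are made up for illustration. Note the sanity check: at a substrate concentration equal to KM, the velocity is exactly half of Vmax, which is that point of inflection on the dose response diagram.

def mm_velocity(s: float, vmax: float, km: float) -> float:
    # Michaelis-Menten initial velocity: v0 = Vmax * [S] / (KM + [S]).
    return vmax * s / (km + s)

VMAX, KM = 100.0, 1e-5   # illustrative values: arbitrary rate units, KM of 10 micromolar
assert abs(mm_velocity(KM, VMAX, KM) - VMAX / 2) < 1e-9   # half-maximal at [S] = KM
for s in (1e-7, 1e-6, 1e-5, 1e-4, 1e-3):                  # sweep substrate concentration
    print(f"[S] = {s:.0e} M -> v0 = {mm_velocity(s, VMAX, KM):6.2f}")

You can see the asymptote in the printout: past ten or a hundred times KM, you're essentially at Vmax. And aconitase, which I just mentioned, sits at the low-affinity end of the KM table.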
This is a key enzyme in the metabolism of glucose. And its substrate, citrate, is found at high enough concentrations that the enzyme doesn't really have to evolve to a very high affinity. So this gives us sort of a crude measurement of what the concentration of the substrate is in the cell. Right? So what we know is that there's probably not a lot of benzopyrene present, but there's probably tons of citrate present, hence the need for only lower affinity. Now, let's also take a look at some kcats. So this is the rate constant for the decomposition of the Michaelis-Menten complex, the ES complex that is now being broken down to form enzyme plus product. Okay? Make sense? Okay. So in this case, again, there's a wide range of kcats. And this tells us something about how hard the reaction is to catalyze. Harder reactions in general have lower kcats. But it can also tell us something about the evolution of the enzyme. Enzymes in general evolve up to the required function and really don't go past that. Okay? There's really no evolutionary drive. There's no selection mechanism that drives the enzymes to be perfect, unless they need to be perfect for some particularly crucial function for the cell. So here's one example of a really crucial function for the cell. The enzyme catalase breaks down hydrogen peroxide into oxygen and water. This is a crucial reaction. Hydrogen peroxide creates a substantial burden on cells. This is a strong oxidant. And oxidants run around and wreak havoc on cellular machinery. And so for this reason, cells have evolved pretty sophisticated mechanisms to very quickly break down such oxidation products. And catalase has a kcat of 100 million per second. So this is a really, really fast catalytic reaction that takes place. And then a slower reaction would be a protease. Proteases, of course, hydrolyze amide bonds. I believe we've seen these before. And their kcats are much lower, likely because this reaction is a little more challenging and a little less favorable thermodynamically. And also, for that matter, it's not as critical perhaps for the cell. Okay. Questions so far? Good. Okay. So these are the numbers that are going to underlie our discussion as we start talking about the properties of enzymes. Okay. These are the same numbers that you learned about in like Bio 99 or 98, whatever biochemistry class you took here at UC Irvine or elsewhere. These numbers are kind of the vocabulary that my biochemistry friends use when they talk about enzymes. Okay. Now the truth is, as a chemical biologist, I don't get too worked up about these numbers. I'm more interested in understanding the atoms and bonds basis for how the enzymes work. And so I guess the best place to start would be, let's start with the perfect enzyme. What would be the enzyme that really can crank, that could maximize its ability to turn over a reaction? And then we'll look at some specifics at the level of atoms and bonds. So the very perfect enzyme, you might imagine, every time it forms this Michaelis-Menten complex, the ES complex, then it goes immediately to kcat. So it forms and then boom, it's over to the kcat. And it just immediately converts the substrate to the product; that happens instantly. On the other hand, the perfect enzyme is not going to have any off rate over here. This off rate represents lost opportunities. This is the sea, the wasteland, of coulda, woulda, shoulda. Okay, right? This is the chance that the enzyme missed.
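Before finishing that perfect-enzyme argument, a quick bit of arithmetic with the kcat numbers just quoted: the time per catalytic turnover is roughly 1/kcat. A minimal sketch; catalase's kcat of about 1e8 per second was quoted above, while the protease figure of about 1e2 per second is an assumed, order-of-magnitude illustration, not a quoted value.

# Turnover time is roughly 1/kcat.
for name, k_cat_per_s in [("catalase", 1e8), ("a typical protease", 1e2)]:
    print(f"{name}: about {1.0 / k_cat_per_s:.0e} seconds per turnover")

So catalase turns over in tens of nanoseconds, while the protease takes about ten milliseconds per bond. Now, back to that wasteful off rate.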
So instead of going to product, the enzyme goes backwards. And so this off rate over here is miserable and inefficient for an enzyme. So the perfect enzyme is not going to have an off rate. And so for the perfect enzyme, you can basically imagine koff being 0. And if we have that, then we can imagine rearranging our KM equation shown a couple of slides ago, such that kon equals the ratio of kcat to KM. And again, notice that the little k's are indicating rate constants and the big KM is an equilibrium constant. So the very best enzyme will have an on rate that's diffusion controlled. In other words, it's limited by the amount of time that the substrate, Brownian motion style, eventually bounces its way to the active site. That should be the slowest step for an enzyme that's perfect. And we've talked about this before, but that rate of diffusion has a speed limit of 10 to the 9th per molar per second. It can't go any faster than that. That's a physical law. It's like the speed of light. You cannot exceed that. Just because it takes a little while to bounce around through all that water and other stuff that's present in the cell. Okay, make sense? We'll take a look in a moment at an example of an enzyme that's far from perfect, and we'll start to understand what its sources of imperfection are. So before we do, let me just give you a little table that I really like that shows us and helps us organize enzymes. This shows us the rankings of enzymes in your proteome. Okay, so this is a listing from most common to least common. It's like a greatest hits of the seven categories of enzymes that are found in the human proteome. The most common enzymes by far are the hydrolases. These are the enzymes that introduce water as a way of breaking a bond, and we're going to see a couple of examples of this. We'll see examples of glycosidases and proteases today. We've already seen examples of nucleases. That was stuff like RNase, right? Remember when we talked about RNase and it was inhibited by Dipsy? This is a similar sort of thing. Can someone help this guy out? Thank you. Okay, transferases, next most common, second position. These are examples of enzymes that transfer functionality from one spot to another, and we're going to look in detail at an example of a protein kinase today, and then later in the class we'll look at a glycosyl transferase. Oxidoreductases, this is like the enzyme cytochrome P450 that takes benzopyrene, introduces an epoxide and oxidizes the benzopyrene. You all remember this, right? I showed you the benzopyrene a couple of slides ago, but earlier in this quarter, when I was talking about cigarette smoking, I showed you how the benzopyrene that looks like this rather innocuous flat structure gets converted into an epoxide and then slips into the pi stack of your DNA and alkylates the DNA. And so these are actually very common enzymes, these oxidoreductases, because they're important for removing toxins. Dehydrogenases are another one that's very common, and perhaps we'll get a chance to see one today. And then finally we get down to the ligases. These are enzymes that spot weld together two functional groups, such as attaching ubiquitin or DNA to something. Isomerases are used to convert substrate into some related isomeric product. These are things like epimerases. The synthetases we've seen before; we talked about aminoacyl tRNA synthetase.
This was that gargantuan complex that read out the anti-codon and the various modifications of the tRNA to make sure that the correct amino acid was being attached to the tRNA during aminoacyl tRNA synthesis. Okay, and then the final one, the lyases, are doing things like decarboxylation. These are actually breaking carbon-carbon bonds in dramatic fashion. So these are aldolases that are doing aldol reactions, et cetera. They're either breaking or making carbon-carbon bonds. So I feel like we've seen many examples of these different enzymes this quarter. So now I can go through and just talk about the ones that are really important that we haven't seen yet, okay, such as the kinases over here. And I believe that's where I'm going to start. Yes, in fact it is. So it turns out that kinases have a common fold that consists of a lower domain down here and then a larger lobe up here. The active site is indicated where these Van der Waals spheres are. This is ATP. So kinases take ATP and transfer the gamma phosphate of ATP to some sort of hydroxyl recipient. Okay, that's generically what they're doing. When we talk about the gamma phosphate: ATP, adenosine triphosphate, has three phosphate groups, called alpha, beta, and gamma. The third one in the row is called gamma. And so that's the phosphate group that the kinases are going to transfer. So again, notice that these have a conserved dual lobe structure even though these have widely disparate activities. This is everything from a receptor tyrosine kinase over here to protein kinase over here. These do have very different activities. They phosphorylate different targets. And yet, on the other hand, they all evolved to have very similar structures. Now, this class of enzymes, like all enzymes, can be inhibited by pseudo substrates that mimic the real substrate. So here's the structure of ATP, but in place of one of the oxygens of ATP, highlighted in blue, we have a nitrogen. And this phosphoramidate inhibits the kinase. Okay, so if you feed this phosphoramidate to a kinase, to any of the kinases I showed on the previous slide, it's going to be game over for them. They're not going to be able to work because they're going to bind to this ATP analog. They're going to put it in a sloppy old bear hug, but this gamma phosphate is missing the oxygen. And missing the oxygen is the same as saying it's totally inert. And so this is going to basically be locked in the active site, in the embrace of the active site, yet unable to transfer the phosphate group. And so the net effect is to shut down the enzyme. And this is a very effective way of killing enzymes. You basically know something about the mechanism, you make a tiny little modification of the substrate, and boom, it's game over for the enzyme. Okay, and you could do this also with sulfur; I've shown you the nitrogen, but sulfur works as well. And again, it sticks in the active site and inhibits the enzyme. This approach also works if you mimic the product. And a large number of enzyme inhibitors pursue one of those two approaches. Either mimicking the substrate, as shown here, or mimicking the product. Either approach works great. So let's zoom in. I showed you the bilobed structure of the kinase. Let's zoom in and take a look at its active site. In the active site, here's the structure of ATP, and there are a series of conserved magnesium ions, these balls over here, that are bound to the phosphate groups of the ATP.
The numbers here indicate the distances in angstroms. Okay, and these numbers are pretty low. Right, if you recall that a carbon-carbon bond is somewhere on the order of like 1.5 angstroms or so, these are pretty close in numbers, right? This magnesium is getting awfully close to this oxygen over here. These are cozy, cozy molecules and atoms. They like being this close. They like being this close because they have complementary charges, right? The magnesium has a plus 2 charge. The oxygens of these phosphate groups have negative charges. So they're attracted by salt bridges or coulombic interactions that we saw earlier in the quarter. So over here, there is the other substrate for this enzyme reaction, which has a hydroxyl. I'm showing it to you with the hydroxyl deprotonated. And after phosphoryl transfer, this oxygen of the substrate has now picked up a phosphate group. And notice that the magnesiums here are helping to stabilize that phosphate group, right? They're lowering the energy of binding by forming that same sort of coulombic salt bridge that we saw earlier. Okay, now if we zoom in and take a look at the arrow-pushing mechanism for this enzyme active site, what we find is that, not depicted on this previous slide, somewhere out here there's a carboxylate side chain from aspartic acid. The carboxylate of this aspartic acid deprotonates the hydroxyl of a serine of the substrate. And that sets us up with an alkoxide. The alkoxide, being a superb nucleophile (it's negatively charged), can attack the gamma phosphate of ATP. And again, the magnesiums get in on the action. They're over here participating fully and stabilizing this negative charge of the phosphate group. That's crucial, right? You can imagine this reaction not going in the absence of those magnesiums, right? Because one negative charge is not going to want to approach a negatively charged phosphate group, right? The negatively charged alkoxide over here is going to be stymied in its attack. It's going to get repelled by this phosphate group. So the magnesiums are shielding the phosphate group, protecting it and preventing it from looking like a negative charge. And so that tees up this reaction very neatly. And then finally there's a collapse of this trigonal bipyramidal intermediate. And just very briefly, the structure around this phosphate looks like a trigonal bipyramid. Anyway, that's interesting. And then there's collapse of this trigonal bipyramidal intermediate to give us our final product. Okay, so to summarize, the most important aspect of this is the notion that the magnesium ions are playing several roles to make this reaction possible. First, they're coordinating and stabilizing the transferred phosphate group as a Lewis acid. Okay, so that helps accelerate the reaction. It turns out that kinase activity in the cell is very tightly regulated. And the reasons for this are perhaps not clear if you don't know much about signal transduction. I'm just going to very briefly cover it today, and then in a future lecture we'll learn quite a bit more about it. In the cell, there's a series of pathways that transfer information. And these pathways are controlled by transfer of phosphate groups to key residues in proteins. So kinases play a really key role in kicking off various processes in the cell.
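To recap the phosphoryl transfer chemistry in one line before we get to regulation, here is a schematic summary of the arrow pushing just described (a summary of the same steps, not a new mechanism):

$$\mathrm{Ser\text{-}OH} \;\xrightarrow{\;\mathrm{Asp\text{-}CO_2^-}\ \text{(base)}\;}\; \mathrm{Ser\text{-}O^-} \;\xrightarrow{\;\mathrm{ATP},\ \mathrm{Mg^{2+}}\;}\; \mathrm{Ser\text{-}O\text{-}PO_3^{2-}} \;+\; \mathrm{ADP}$$

The aspartate acts as the base, the resulting alkoxide attacks the gamma phosphate, and the magnesiums shield the negative charges the whole way through. With that chemistry in hand, here's how the cell keeps kinases on a leash.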
So there are cascades of kinases, where one kinase phosphorylates the next kinase, which phosphorylates the next kinase, and so on and so forth. It turns out that this process is very tightly regulated, because you don't want your cells going wild. You don't want them to be doing uncontrolled cell division, for example. And so for this reason, the cell very tightly regulates kinase activity. And I want to show you a couple of vignettes about this tight regulation, because it's crucial to our understanding of how kinases work. Okay, so here's one example. This is an example from the enzyme protein kinase A, the cyclic AMP regulated kinase. And the way this works is there's actually a regulatory subunit, shown here in blue, that binds to the kinase and actually has an inhibitory loop that blocks access to the active site. Okay, so does everyone see how this dark blue thing works? It's binding here and then it has like this long finger that fits into the active site and blocks the kinase from binding to any substrates. This shuts down the kinase, and the ability to shut off the kinase is crucially important. Okay, if you don't have this, the kinase will be running around rampant, wreaking havoc, turning on stuff, shutting off stuff, causing death and destruction and general mayhem. And I do mean death and destruction. Okay, these kinases are that important. Now, when the level of a reporter molecule called cyclic AMP reaches a certain concentration, this cyclic AMP binds to the regulatory subunit and causes the regulatory subunit to dissociate from the catalytic subunit of protein kinase A. So these two molecules get forced apart as the blue one flips into a new conformation. Upon binding to cyclic AMP, the thing changes its shape and it no longer has affinity for protein kinase A. This is good if you're protein kinase A. It frees it up to go off and do the mission that it's wanted to do for its entire life, which is to run around the cell and phosphorylate anything that moves. Nearly anything that moves. It actually has a little bit of specificity, but for the most part, protein kinase A likes to phosphorylate lots of different binding partners. Okay, this is a pretty promiscuous molecule. Now, here's the thing. Another way of regulating enzymes is to phosphorylate them. Okay, so this first one is one way, where you have some regulatory protein that binds. A second way is to phosphorylate residues that are near the active site. Okay, so for example, this non-hydrolyzable analog of ATP, which has the nitrogen in place of oxygen, this is the molecule I showed on a previous slide. This tells us where the active site is. But over here are two residues that can be phosphorylated to flip on this MAP kinase, this p38 gamma MAP kinase. And so one of these is a tyrosine and the other one is a serine. And so this kinase waits around until it gets phosphorylated, and at that point it goes into gear. Okay, so this is like an on-off switch for the kinase. In the absence of this, the enzyme doesn't have the right conformation. It can't be a kinase. Okay, so the phosphorylation of the kinase puts it in gear, turns it on, and sets it going. Does this make sense? Any questions about what you've seen so far? Okay, that's the basics. I want to talk to you about the really neat stuff, the latest results in thinking about how kinases work and thinking about their motions.
And again, this is kind of an abrupt departure from sort of standard material as presented in biochemistry classes. And it really represents the frontier in chemical biology. Many of the next experiments I'll be talking about were done, actually, with Miriam. She's one of the leaders in this area. Okay, so the thing is, I want to talk to you about how enzymes work at a mechanistic level and how they actually work dynamically. How do they move when they do these reactions? So I should tell you that enzymes have great motions associated with their activities. This is unlike the case of conventional catalysis by organometallic complexes that you learned about back in Chem 51, okay, or that you learned about in Chem 125. In those cases, the organometallic catalyst binds and perhaps it plays some Lewis acidic role, but certainly we don't think about its motion. Okay, we don't think about it having some, you know, movement associated with it, some dynamics. Enzymes, it turns out, for the most part almost all have very wild and very quick motions associated with them, and a frontier in chemical biology is to understand how those motions impact catalysis. How do those motions allow enzymes to be effective catalysts? And so it turns out that if you get a big, you know, round bottom flask full of enzymes, you'll never be able to see those individual motions. And the reason is they tend to get blurred out, okay? So if we look at a large number of molecules, we'll never see the individuals in motion, because all of the enzymes in that flask are going to be running along at different speeds and everything gets blurred out. Okay, so instead, in order to see individual motions, we have to look at single molecules. And to understand this a little bit better, let me offer an analogy. Let's imagine that I convinced, you know, the Orange County Marathon folks to reroute the marathon. So instead of it being on Sunday morning, I convinced them to run it here Thursday at 10, 10 a.m. And I convinced them to start at that end of the classroom and send all the runners through the classroom from that door to this door. Okay, so now everyone's running through here, all 10,000 runners. You can imagine what you're going to see is just a blur of pumping arms and legs, right? Everyone's going to be trying to get in and out of this classroom as quickly as possible. It's going to be total pandemonium. That's the situation when we look at an ensemble of enzymes. It's total pandemonium. It's a blur of arms and legs. We don't see anything. Everything gets averaged out. And by see, I mean using tools like spectroscopy, using tools that you're familiar with from your other classes. So over the last 15 years, there's been a revolution in this area of chemical biology or biophysics, where scientists have started to look at individual molecules in isolation from all of their other friends and neighbors. Okay, so now instead of having the entire marathon coming through the classroom, let's imagine that I convinced each runner to come running through one at a time. So they're going to start over there and then come running through here. What you will see, because each runner is isolated from all the other runners, is you'll be able to see their arms and legs moving, right? Because now there's no blurring out effect, right? And furthermore, if you look closely, you'll be able to see some runners moving faster than others. Maybe one runner has a different stride than her neighbor, right?
Because she comes running through and, I don't know, maybe she extends her leg a little longer than the runner behind her. Okay, but if we convince them to be isolated from each other, then we can really start to get information about how they move. And that's the situation we find ourselves in when we start looking at enzymes. And so this area of single molecules allows us to look at conformational steps, at intermediates, and the kinetics and dynamics that underlie enzyme function. And this again is a major frontier, and it's a really exciting area to be involved in research. Okay, so everyone with me so far? Everyone understands this idea of looking at single molecules, right? Okay, good. I want to talk to you next about how we're going to observe our single molecules. There are a number of different fluorescence techniques for looking at single molecules. Patch clamping, you may have heard of, is a 40-year-old proven technology that works really well for looking at individual receptors. That works fine. In the last five years or so, groups here at UC Irvine have been at the forefront of inventing sort of a tiny little microphone that allows us to listen in to enzymes as they run. And it has some advantages over those other techniques. And so that's what I want to talk to you about today. Okay, so this is a collaboration between my laboratory and Phil Collins in the Department of Physics here at UC Irvine. And he's pioneered ways of building circuits that are based on carbon nanotubes. And this is an example of a carbon nanotube. This is basically a graphene layer, a layer of carbon graphite, that's rolled up into a cylinder. It looks sort of like chicken wire. These wires though are amazingly conductive. Carbon nanotubes are a really remarkable material. They have remarkable mechanical properties. They have remarkable properties for conducting electricity and for conducting heat. In terms of conducting electricity, all of the electricity is going to be flowing through the outside of the wire, through these bonds out here. The electricity is not flowing through the middle of the wire. So this is unlike, for example, the copper wires that are used in wiring the walls, okay, wiring the electrical outlet over there on the walls. And this property makes the outside of the wire superbly sensitive to tiny little perturbations on its surface. And that's what we're going to use. So here's the way we do this. Students in the Collins laboratory and my laboratory start with silicon wafers that are about this big. We go to the engineering building across campus. The students put on bunny suits. And we build, using photolithography, circuits that look like this. There are these contact pads to which we attach wires. And then down here, you get down to these interdigitated electrodes that do not touch each other. So this is an open circuit. But somewhere out here, we sprinkle a tiny amount of iron catalyst that catalyzes growth of one of these carbon nanotubes, a single walled carbon nanotube, across the wires to complete the circuit. And I false colored it in red over here. Okay. So now what we do next is we turn this wire into the world's tiniest microphone. We turn it into a device called a field effect transistor. It's not so important how that works. What matters is it's more or less the same as the microphone found in my cell phone. Okay. Same principle. And next, we're going to glue individual proteins directly onto the microphone and listen as they flap around. Okay.
So, runners: if we had runners running through here, you'd expect to hear the pounding of their feet, right? And you'd expect to be able to interpret the noise of their feet to tell us something about their stride, whether or not they're accelerating, whether or not they're slowing down, whether or not they have a funny heel strike, et cetera. Right? Makes sense? So we're going to do the same thing but with proteins. Now, I know that you're probably thinking, proteins don't make noise. I have proteins all over my body and I'm not hearing anything right now. And the truth is the noise is very, very tiny. It is so tiny that it's very hard to hear. But moving charges do make noise. And I'll give you one example of this. If you're at the beach with a bonfire down here at Little Corona Beach State Park, and you have this big bonfire going, you know how the wind kind of whips the flames and the flames make this kind of neat whooshing noise? That sound of the flames moving is due to plasma in the flames, that is, charged ions in the flames that are moving around. So charged functionalities make noise as they get pushed around. And in fact, actually there's a loudspeaker based on this. These are really expensive stereo speakers, on the order of like $20,000 a pair. They'd better sound good at that price. But it's actually based upon having a plasma that's moved around by a little magnetic coil. Okay? So you can actually hear charged ions moving around. And that's what we're going to do when we glue in the protein. So let me show you what it looks like when we have the protein glued in. This is the schematic diagram over here. We have the carbon nanotube. Here's the protein glued in. This protein is streptavidin. Familiar, right? To everybody in this classroom. And we have streptavidin conjugated to a tiny little dot of gold, and that's shown here. Okay? So the little dots here, that's the gold attached to streptavidin. And the horizontal lines are the wires, the carbon nanotubes. And the vertical are the electrodes over here. And you can see we're getting one, one, one, one, one attachment. Okay? So the breakthrough that Phil and I came up with, with our coworkers, our friends, the graduate students, was that we developed a way of making one and only one attachment each time to the carbon nanotube. This means then that we're isolating the enzymes away from all their buddies, which means then we can start looking at conformations and intermediates. Okay? This is all a long introduction. We're going to get back to the kinases in a moment. Before I do, let me just set the stage. Here's the experiment again. We have the electrodes. We have the carbon nanotube. It turns out that, of course, you can't run your cell phone in water, so you can't run one of these tiny microphones in water. Okay? I think I did this experiment last week. I dropped my phone in a bucket of water. Actually, it was my cat's water bowl. And I pulled it out quickly enough, but it definitely did cause some damage. So electronics and water don't mix. I don't think that surprises anyone in the classroom. But all biology, of course, takes place in water. So this creates a dichotomy. And to solve this, what we do is we cover up all the electronics with a layer of poly(methyl methacrylate). This is shown here in gray. And then we blast a little tiny window using something called an electron beam that just opens up a little tiny region of the carbon nanotube.
And that's where we're going to do the experiment. Now, all the images I've been showing you up till now are electron micrographs, using electron microscopy. We're now going to get really small as we start imaging individual molecules of proteins. These are so small that you can't really see them very readily using electron microscopy, except if you use that trick that I showed on a previous slide where you coat things with gold; that was the gold-labeled streptavidin. So now to see these things, we have to use atomic force microscopy, where we're getting down to really tiny resolutions of one nanometer or so. So here's what it looks like. This is one enzyme attached to the carbon nanotube. And this is an AFM image, an atomic force microscopy image, just showing the windowed region, just showing the carbon nanotube that's exposed. This little blob right here is actually the enzyme attached. It has the right dimensions for that one enzyme. And now we turn on the microphone and it's lights, camera, action. Okay, at that point then we're ready to listen in. Okay, one more image. This is before and then that's after. And where it's circled, you can see very clearly the enzyme attached. Okay, everyone's still with me? Questions so far? All right, yes? What are the other blobs? Oh, yeah, thanks for asking. These other blobs that are kind of around, those are other little enzymes that we can't get rid of. It's actually enormously hard to do these sorts of images. It just turns out that proteins are kind of sticky. There might even be some salt crystals somewhere out here. So there's always some garbage-y stuff that we've been totally unable to get rid of despite a lot of work. It took a lot of work to even get images that are this clear. So. Could an enzyme behave differently when it gets separated from the others, in the way that it reacts? Okay, that's a good question. Okay, that's a really good question actually. And thanks for not being on the study section that asked that. Okay, so the question is, would an enzyme in isolation behave differently than an enzyme that's next to its neighbors, right? In the same way that a crowded field of runners is going to run differently than a solo runner would. We don't know. I would like to think it's going to run the same. But it is a legitimate caveat, and I thank you for that. I will have to think about that some more. Thank you. So why do we study it that way? Why don't we, in fact, look at the crowded case? Yeah, so it's true, enzymes are in really crowded conditions inside the cell. But it's not like there's a thousand enzymes that are all doing the same thing crowded together. It's more like there's a couple of enzymes that are kind of jammed in with hundreds of other molecules inside the cell. You know, so they're not all doing the same thing. So we can recreate that sort of thing. We can recreate the crowded conditions inside the cell and compare. We haven't done that experiment, but I'd love to do it. Okay? Okay, let's get back to kinases. So again, this is protein kinase A. It still has the two lobes that I showed earlier. Here's the big lobe down here and here's the smaller lobe up here. And somewhere close to this upper lobe, Miriam, together with Issa Moody, a former graduate student in the laboratory, engineered a single cysteine.
The cysteine, of course, has a sulfur functionality, a thiol functionality, and that allows us to attach the enzyme site-specifically to the nanotube, to this microphone down here, through a pyrene. Okay, so pyrene is making yet another cameo in today's lecture. And as you might expect, the pyrene is going to pi-pi stack onto the carbon nanotube. Okay, because it's so hydrophobic, it's looking for a nanotube to stick onto. And it turns out the enzyme is very firmly held in place. This is like a very special kind of molecular glue that sticks these two molecules together. But notice that it's being held by a non-covalent interaction. And in practice, this thing is held in place for 10 or 12 hours. We don't see it coming off. It's really stuck in there very firmly. Okay, and again, here's another AFM image. And that little blob attached to the carbon nanotube is our enzyme. Okay, yeah, question over here. Sergio. How do we ensure that we have just one enzyme attached and not two? Yeah, so we do AFM, this technique of atomic force microscopy, before we get started with the experiment, just to make sure that we have one attachment. And we use this special technique called in-liquid AFM. If we see two attached, we wash it away and then start over. If we see zero attached, we start over with the attachment. Okay, yeah, question over here. So when you have one, and you're sure from the AFM that you have one, how do you know, for example, that a second one doesn't attach afterwards when you start listening? Okay, so afterwards, we don't add any more enzyme. So we have like purified buffer that has no enzyme around. And I know what you're thinking. You're wondering, well, what about this blob over here? What if it decides to get up and wander over here? It turns out these blobs are pretty firmly stuck down on the surface. And another thing is, if another one attached, we'd hear that one running alongside it. Okay, right? In the same way that two runners would make a different sound than one runner, right? You'd expect to hear a different rhythm. So we could detect that. We don't see it. Okay. Other questions? These are great questions, you guys. Yeah, over here. So you determine what enzyme you have based on the AFM image beforehand? So we only add one enzyme. So we knew that we had PKA around. Yeah, and then the AFM image confirms that we have one that's attached. Okay. All right. And question in the back, Anthony. How do you actually measure the noise? Okay, I'm getting to that. Okay. Let me show you. Okay. Anthony is impatient. Okay. So you guys have exchanged songs by SoundCloud, right? So everyone, you may raise your hand if you don't know what SoundCloud is. Okay, great. This is great. You guys are totally savvy. So when you use SoundCloud, you know how there's this little pulse that comes with it and it tells you about the loudness of the thing? I'm going to be showing you data that's like that, where we're going to be watching noise. Okay. And that's shown here, where this is time on the x-axis, and on the y-axis, this is the current flowing through the nanotube. So that's going to be our noise. Okay. An enzyme by itself flutters around a little bit, but for the most part, it's totally quiet. Okay. So if you don't feed the runner, you know, some oxygen or glucose, you know, some energy bars or whatever, the runner doesn't start running. Okay. Enzymes are like that too. Okay. Unless they get substrate, this enzyme happens to be totally quiet.
Some enzymes, we find, kind of randomly flutter around, and with those our technique will not be able to pick things up. Okay. Now, when we add ATP, we see a new blip that appears. Do you see this lower blip over here? Each one of these corresponds to the ATP bound state. So we go from up here, where the enzyme is open, to down here, where it's bound, and then back. And we can measure how long the enzyme spends in this bound state and derive a dissociation constant, a KD, for this enzyme, for this PKA-ATP interaction. When we do that, we find that that KD corresponds to what's measured in ensemble kinetics. That tells us that actually we're seeing something very similar to what's seen in the more crowded cases. Okay. Okay. Next, we wash away the ATP and then add in a peptide that's a peptide substrate. This has the serine hydroxyl that's going to be phosphorylated by the gamma phosphate of ATP. And again, we see some intermediate conformation as the enzyme goes from open to bound. Okay. And again, do you see how there's like more blips down here? The enzyme binds to its peptide substrate with greater affinity. It grabs on tighter. It has a lower KD. Okay. Everyone's still with me? Make sense? Okay. Now, check this out. This is the really cool one. This is now the enzyme plus ATP plus the peptide. And now we see three levels. Okay. So this is one second over here. This is now two tenths of a second. I'm zooming in. And what we see is that these three levels correspond to ATP bound and then peptide bound, which then gives us a catalytically committed conformation. Okay. So when the enzyme starts working, it goes between open, intermediate, and catalysis. And I'm just going to call these one, two, three. This is the waltz of PKA. Okay. So here's the enzyme waltzing along. One, two, three. One, two, three. One, two, one, two, one, two, one, two, three. One, two, one, two, one, two, one, two, three. One, two, stuck. Three. One, two, three. One, two, three. One, two, three. So that's an enzyme in action. This is what the enzyme looks like as it goes about its business. Now what's hugely inefficient is the koff. Do you remember earlier I told you the perfect enzyme should have zero koff? In this case, we see the enzyme in real time, in action, being inefficient. Here it is being inefficient as it goes one, two, one, two, one, two. This is it with the koff. That catalytic inefficiency is what dooms the enzyme and makes it waste opportunities. The enzyme is trying to make up its mind. Go to product, back to substrate. Product, substrate, product, substrate. And that lack of decision is what makes this enzyme inefficient as a catalyst. There are other kinases, such as kinases involved in metabolism, that are far, far more efficient than this enzyme over here. If you're looking for something, could I ask you just to wait until after the class is over? Oh no, you're here for the class. You're just kind of late. Okay, no problem. Welcome. All right. So anyway, this is the enzyme in action. And what this shows us is that the enzyme fluctuates its speed enormously. Okay? And this is kind of mind boggling, and I'm just going to tell it to you. It turns out that the enzyme's speed fluctuates from second to second by a factor of 100. Okay? So this is like going out to the 73 out here, the freeway out here, and pulling up alongside a Honda Civic. And then suddenly the Honda Civic goes from 55 miles an hour to 5,500 miles an hour.
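As an aside, here's roughly how a KD falls out of a two-level trace like that ATP one: the mean dwell time in the bound level gives koff, the mean dwell time in the open level, together with the ligand concentration, gives kon, and KD is their ratio. A minimal sketch in Python; every number here is made up purely for illustration, not real data from these experiments.

import numpy as np

bound_dwells = np.array([0.012, 0.008, 0.011, 0.009])   # seconds spent in the bound level (illustrative)
open_dwells  = np.array([0.052, 0.047, 0.061, 0.040])   # seconds spent in the open level (illustrative)
atp = 100e-6                                            # assumed ATP concentration, in molar

k_off = 1.0 / bound_dwells.mean()          # per second
k_on  = 1.0 / (open_dwells.mean() * atp)   # per molar per second
print(f"KD ~ {k_off / k_on:.1e} M")        # dissociation constant from the single-molecule trace

Anyway, back to that Honda Civic that just shot up from 55 to 5,500 miles an hour.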
And then back down to 55 miles an hour, all in the course of a second. So these enzymes are wildly changing their speeds. They're changing speeds much faster than any runner. They're changing up to speeds that are almost inconceivable to us humans. And really, that's the stuff of life. It's essential that this enzyme is able to alter its speed in order for it to be regulated. Remember earlier, I talked about the regulation of protein kinase A. That regulation is going to control its speed. And in doing so, that turns the enzyme from being a wildly efficient catalyst to being a catalyst that's not even worthwhile, that doesn't operate on a time scale that's useful for the cell. And this is really the essence of how catalytic biology takes place. Okay? Any questions about this? Yeah? Do you see any pattern in that, the on and off, or is it just a random process? Okay. This is a brilliant question. We spent a lot of time looking for patterns in our data and looking for correlation between one step or another. And it sure looks pretty random. There is a small amount of a memory effect, in the sense that if the enzyme hits this intermediate state, it's more likely to go down to three than it is to go back to one. Okay? And so that's actually a thermodynamic effect, that the enzyme has evolved to do kcat in preference to koff. All right. Let's talk about another hydrolase, a different enzyme. The second enzyme I want to talk to you about today is lysozyme. This is an example of a hydrolase. And remember, I'm moving down, or actually I'm still at the top up here, in our most common enzymes. This enzyme was discovered about 100 plus years ago and it's been intensively studied. This is the X-ray crystal structure of lysozyme. And in fact, the very first enzyme X-ray crystal structure ever solved was this enzyme. The active site up here has an 8-angstrom hinge motion. And so this enzyme has kind of a Pac-Man-like motion as it hydrolyzes the glycosidic bonds of the polysaccharides found on the cell surface of bacterial cells. Okay? So here's a bacterial cell wall. And the enzyme is going to cut apart the glycosidic bond between each one of these glycan moieties found on the cell wall. Okay? So it's going to be chopping apart the polysaccharide. In doing this, this will basically burst apart the cell. In essence, the enzyme is basically going to be chewing apart the bacteria. And this has the effect of killing the bacteria, right? You're breaking their cell walls. They explode, et cetera. This enzyme is found in high concentration in chicken eggs. That's the hen egg whites over here. And it's present to prevent colonization by bacteria. And you might recall, avidin was isolated also from chicken egg whites. So biochemists have been studying what makes eggs so special as sterile vessels for a very long time. Okay. Let's take a quick look at the arrow-pushing mechanism for how this enzyme operates. In this mechanism, the enzyme goes through a covalent intermediate. Let's start over here. So I'm zooming in now to the polysaccharide region of the cell wall of the bacteria. The enzyme is going to cleave this bond that's indicated with an arrow. And this is an example of a hydrolase, meaning it's going to introduce water across this bond. So the first thing that the enzyme does is torque this N-acetylmuramic acid moiety. Okay. So it's going to torque this carbohydrate from being a nice chair conformation to being a boat conformation.
This is a crucial aspect of what makes enzymes such effective catalysts. This enzyme is going to be catalyzing a reaction a thousand times more efficiently than if the reaction just had to happen by itself. I actually think it's like a hundred thousand times more efficiently. In order to do this, the enzyme is going to be physically bending the substrate. And by physically bending the substrate, this helps to accelerate the reaction. So here it is, pushed up into this boat conformation. Notice that the boat conformation neatly sets up an SN2 attack by this carboxylate of aspartic acid, attacking backside displacement style at this glycoside. Okay. This is crucial. So the glycoside gets protonated by one glutamic acid, and then over here a carboxylate from a nearby aspartic acid attacks, doing a backside SN2 displacement. And this protonated glycosidic bond then is a very effective leaving group, because the second arrow, highlighted in red over here, kicks electrons to a positively charged oxygen which is all too eager to accept those electrons. Okay. So this gives us a covalent intermediate. This is another common way that enzymes accelerate reactions. In this case we're seeing it form a covalent intermediate, and then this covalent intermediate gets hydrolyzed. So half of the polysaccharide floats away. Next, a water comes in, gets deprotonated by the glutamic acid, the glutamate up here. And then this hydrolyzes the ester bond between this polysaccharide and the aspartic acid. Okay. So some notable features here. I'm showing you an example of acid-base catalysis. The enzyme is simultaneously acting as both an acid over here and as a base over here. In fact, it's even wilder than that. Check this out. It's the same functionality, this glutamic acid, that acts as both the acid and the base. And to me that's just an elegant simplicity that makes enzymes so beguiling, right? If we were in the chemical laboratory trying to make molecules, you know, using glassware and such, we'd either dump in a bunch of acid or dump in a bunch of base, but you wouldn't add simultaneously both acid and base, because they would neutralize each other. And enzymes have evolved to be able to simultaneously catalyze things using both acid and base catalysis. Furthermore, this enzyme has evolved to form a covalent intermediate, sometimes this is referred to as a ping-pong mechanism, that eventually gives us back the hydrolyzed glycosidic bond. Okay, really, really beautiful. This is what you learn in biochemistry classes. And the problem with it is that it neglects enzyme dynamics, which are really critical. As the enzyme moves, it can then help to torque the conformation of the ring into this boat conformation. In the absence of this movement, it doesn't really make sense why it is that the enzyme is actually going to be torquing this substrate. Right? Because the substrate binds in the chair conformation up here. Why should it get pushed into this other conformation unless the enzyme is doing the pushing? And that, in fact, is what we see. Okay, so same idea. We're going to attach an enzyme to the carbon nanotube and then listen in as the enzyme works. When we do that, here's a paper from the lab from about a year ago. What we see is that, again, the enzyme by itself is relatively quiet.
And then when we add the substrate, the polysaccharide, the peptidoglycan that I showed on an earlier slide, the crosslinked net, there's an immediate jump upward and then there's all this noise in here. This is the enzyme chewing on the substrate and we get to listen in. It's just like SoundCloud, basically. Okay, some controls. I haven't been showing you controls, but these are everything in biology. This is substrate by itself in red, overlaid. And then this is enzyme by itself. Again, it's relatively flat, in purple. And then when we have enzyme plus substrate, we see this motion where what we're seeing here is enzyme open, closed, open, closed, open, closed, open, closed, open, closed, open, closed, open, closed. This is, you know, about a third of a second. But we get to watch the same enzyme cranking over for a long period of time. And again, what we find is that the enzyme is highly variable. It accelerates, it slows down, it speeds up, it slows down. It accesses different conformations. In fact, it accesses at least two dramatically different speeds. And you can actually see that in the 40 seconds of data over here. Do you see how there's this dense region and then a less dense region and then a dense region? The dense region corresponds to rapid switching. The enzyme has an overdrive gear that it goes into. So it flips gears to second gear and it just starts cranking along at a much faster speed. And you can see that over here where the enzyme is going open, closed, open, closed, open, closed, open, closed, open, closed, open, like 300 times per second. Whereas over here it's doing open, closed, open, closed, open, closed like 50 times per second. So this is dramatic. That's six times faster. And what's crazy is that the enzyme does this all day long. It switches between first gear and second gear, first gear, second gear, back and forth. And a big mystery in the field is what's up with second gear? Okay? So to address that question, I'm going to skip some stuff. To address that question, together with collaborators, we chemically synthesized a version of the polysaccharide that didn't have crosslinks. This is like one strand of the net that I showed you earlier. Same polysaccharide. Now we're using chemical synthesis to access a new substrate. And what we find is that the enzyme has a different type of activity. I do have to show you some more controls. Okay? I can't get away from this. They're crucial. These are mutant enzyme active sites. Do you remember earlier I showed you the carboxylates that are required for the enzyme to operate? So we mutated those carboxylate residues and the enzyme no longer works and therefore it never closes. The other one of these enzyme mutations traps the covalently bound form of the substrate that I showed earlier in the ping-pong mechanism. And the enzyme never can hydrolyze back off the substrate. And again, it never gets back to closed. Okay. So I get to finally tell you what the difference is between first gear and second gear. This is a day in the life of an enzyme. Here's how it spends its time. Okay? So this is an enzyme cranking along happily, being fed either the linear substrate or the crosslinked substrate. In the case of getting the crosslinked substrate, it gets to hydrolyze things about 50% of the time. But if you feed it the linear substrate, it goes wild. It gets to hydrolyze glycosidic bonds 88% of the time. And then this over here is second gear, in blue. That's nonproductive rapid chatter.
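To make the first gear versus second gear distinction concrete, here is a minimal simulation sketch in Python of a two-state open/closed switcher at the two speeds quoted above, roughly 50 and 300 flips per second. The exponential waiting times are a standard single-molecule modeling assumption for illustration; this is not the lab's actual analysis code.

```python
# Minimal sketch (illustrative assumption, not the lab's analysis pipeline):
# model the enzyme as a two-state "telegraph" signal whose waiting times
# between open/closed flips are exponentially distributed.
import random

def telegraph_flips(rate_hz: float, duration_s: float) -> int:
    """Count open/closed transitions over a time window."""
    t, flips = 0.0, 0
    while True:
        t += random.expovariate(rate_hz)  # waiting time before the next flip
        if t > duration_s:
            return flips
        flips += 1

first_gear = telegraph_flips(50.0, 1.0)    # ~50 flips in one second
second_gear = telegraph_flips(300.0, 1.0)  # ~300 flips in one second
print(first_gear, second_gear)             # second gear is roughly 6x faster
```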
And over here in the crosslinked substrate, the blue is much more apparent. So in other words, the crosslinked substrate, the substrate found on the surface of the bacterial cell, corresponds to second gear. So what we think is happening is the enzyme is mowing across the surface of the cell, chewing contentedly. Bond after bond after bond after bond, happily hydrolyzing it all. And then it hits one of these peptide crosslinks and gets stuck. And when it gets stuck, its response is to start chattering away. It flips gears and it just starts going six times faster. And what we think is happening is that it then transits along the peptide down to the parallel polysaccharide. So in the same way that DNA has a five-prime to three-prime directionality, polysaccharides have a directionality as well. And it turns out the surface of the bacterial cell is a highway of parallel polysaccharides. So the enzyme comes along, hits a crosslink, goes down, zooms along, hits a crosslink, goes down, zooms along, down, across, down, across. So what the enzyme is doing is zigzagging across the surface of the cell as it chews apart the surface of the bacteria. And in retrospect, this totally makes sense, because again, the enzyme evolved to poke holes in bacteria. And by doing a two-dimensional rip in the surface of the bacteria, this makes the enzyme much more effective at killing its bacterial targets. Questions? Yeah. So during the nonproductive chatter, when it's not actually hydrolyzing, is the enzyme transporting itself? Yeah. Well, it's that nonproductive chatter that we think is the enzyme moving along one of these peptide crosslinks. So, yeah, Anthony. So as soon as it encounters something it can't press through, it'll switch over. Until it finds a new glycosidic bond, and then it goes to town again. Okay. Well, let's move on. I want to talk to you about other enzymes. We have lots to talk about. I want to talk to you very briefly about proteases, which cleave amide bonds. We've seen examples of these. In blood, there's a whole cascade of proteases that are used to respond to damaged blood vessels, a series of factor 7a, factor 10a, et cetera. Proteases where one cuts the next, and the next one cuts another, et cetera. All the way down to the point where you get production of fibrin, which then can crosslink to fix the damage up here. Okay. So this is an important mechanism for blood clotting. And naturally, if you're missing any one of these proteases, or one of these proteases happens to be mutated, you're in big trouble. Your blood will not clot. This happened in inbred families, such as the royal families of Europe at the turn of the century. This is the czar, Nicholas II. His wife Alexandra passed on the gene for hemophilia to their son Alexei down here. And again, this is a mutation to either factor 9 or factor 8, which are both genes found on the X chromosome. So they're passed along by the mother. Okay. Apoptosis is also regulated by a series of proteases. Apoptosis is the cell suicide mechanism that we talked about earlier in this quarter. And each protease activates the next one. So you have a protease up here called a caspase that cleaves the next caspase in line, which then cleaves this, et cetera. So this guy cleaves this guy, which cleaves this guy, et cetera. Let's take a closer look at an example of a protease. The protease I want to show you is one of my favorites. It's isolated from one of my favorite fruits, papaya.
And it's perhaps appropriately called papain, because it's isolated from papaya. It's an example of a cysteine-based protease. And it's furthermore another example of a nucleophile-based enzyme mechanism. And I chose this one because cysteine, of course, is the preeminent nucleophile, as illustrated earlier today when we talked about engineering in a single cysteine on the surface of the enzyme as a way of attaching it to a specific spot on the carbon nanotube. Okay, so here's the mechanism for how this enzyme works. In practice, it's actually a fairly complex mechanism, in the sense that it's a concerted mechanism. Specifically, here's the nucleophilic functionality of the cysteine in the active site. I don't think I pointed it out here, but here's the cysteine. That's the business end of the molecule, in its active site. And in a concerted mechanism, the cysteine is simultaneously deprotonated to nucleophilically attack the amide bond. And then this amide bond carbonyl gets protonated by an acid residue that hovers above the carbonyl of the amide bond. This happens in one fell swoop, from nucleophilic attack to protonation, all at once. And that kind of concerted dance of catalytic efficiency is yet another example of what makes enzymes so special. The fact that everything is kind of held together at once lowers the transition state energy. Right, so now you don't necessarily have to stabilize a protonated carbonyl in an active site. Instead, you wait until electrons appear up here on this oxygen before it gets protonated. So that lowers the energy for this transition state. Okay, there's other ways of depicting this as well. I prefer this concerted mechanism. So all of these, okay, actually I'm going to skip the serine-based proteases. They're similar to the cysteine-based protease I showed earlier. I'm going to skip the zinc proteases as well. There's quite a few others. Like the kinases, the proteases are involved in crucial processes in human physiology, such as blood clotting, which you would not want happening, you know, just here and there. Because they're involved in such crucial processes, the enzymes themselves are tightly regulated, oftentimes regulated by some loop in a pro-enzyme that has to be cleaved, where the terminology of pro means a reaction takes place that then converts it into the active form of the molecule. So for example, prodrugs are precursor drugs that are then converted into the active drug by some enzymatic process. And over here, we see a pro-enzyme that has a loop blocking access to the active site of the enzyme; that loop gets cleaved, and then that allows the pro piece to dissociate and turn on the enzyme. So enzymes can be very readily inhibited. You can do things like have transition state analogs. We've talked about transition state analogs before. Here is the transition state for hydrolysis of an amide bond, and here is a very effective phosphoramidate transition-state analog. Notice that this also has the tetrahedral geometry of this transition state up here. And if you do that, you can actually very effectively inhibit this enzyme. Other types of inhibitors: phosphonates down here, phosphoramidates over here. These KIs are dissociation constants for binding; it's like a KD, except it's for the inhibitor binding to and inhibiting the enzyme. And again, smaller numbers mean more potent inhibitors. And notice what a champion the phosphoramidate is, a near-picomolar inhibitor.
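To see what a near-picomolar KI buys you, here's a back-of-the-envelope sketch in Python. The simple saturation-binding formula below, which ignores substrate competition, and the concentrations chosen are illustrative assumptions, not numbers from the slide.

```python
# Hedged sketch of why a smaller K_I means a more potent inhibitor.
# Ignoring substrate competition, the fraction of enzyme occupied by
# inhibitor follows simple saturation binding (values are illustrative).

def fraction_inhibited(conc_inhibitor_m: float, ki_m: float) -> float:
    """Fraction of enzyme bound by inhibitor under a simple binding model."""
    return conc_inhibitor_m / (ki_m + conc_inhibitor_m)

for ki in (1e-6, 1e-9, 1e-12):  # micromolar, nanomolar, picomolar K_I
    bound = fraction_inhibited(1e-9, ki)
    print(f"K_I = {ki:.0e} M -> {bound:.3f} bound at 1 nM inhibitor")

# The near-picomolar inhibitor is essentially saturating at 1 nM, while
# the micromolar one barely binds at all.
```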
Okay. Now, there's a million other things I could talk to you about. I'm going to pick this up next Thursday. When we come back, we'll be finishing off Chapter 6 and going on to Chapter 7. The midterm will cover through today's lecture.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:00:41 Enzymes 0:03:20 Repeat Proteins 0:05:16 Equilibrium Constants 0:06:25 Enzymatic Catalysts = Catalytic Receptors 0:07:03 Michaelis Constant for Measuring Catalysis 0:15:30 The Perfect Enzyme 0:21:29 Kinases: Phosphorylation of Ser/Thr or Tyr 0:34:19 Why Study Single Molecules? 0:37:44 How to Follow Enzymatic Catalysis with Single Walled Carbon Nanotubes 0:41:34 Single Biomolecule Bioelectronics 0:44:24 Before and After Enzyme Attachment 0:46:29 Watching cAMP-Dependent Protein Kinase A 0:49:18 Further Generalization: Protein Kinase A 0:55:55 Lysozyme as a Model Enzyme for Glycoside Hydrolysis 1:08:22 Proteases Cleave Amide Bonds 1:12:37 Regulation of Proteases Through Pro-Enzymes
10.5446/18871 (DOI)
Welcome back. This week we get to talk about protein function, which is the follow-on to last week's discussion of protein structure. And I would say last week was all about pretty molecules. We saw beautiful structures, architectures that were fascinating, but I didn't tell you what makes them so special. I didn't tell you why it is that those structures allow proteins to do unique things. And proteins really are the superheroes of biology. These are the molecules that make it possible to transform whole solutions. And they make transformations possible that otherwise would be kinetically and thermodynamically inaccessible to the cell. They're catalyzing reactions without which biology would not be capable of taking place. Okay, so specifically this week we're going to be talking about some pharmacology. We're going to be looking at dose-dependent response. We're going to look at non-covalent binding. And then by analogy to non-covalent binding, we're going to make the leap to catalytic binding and we're going to try to understand how enzymes work. I want to talk about how we're going to measure enzyme activity. We'll talk about how they're regulated, we'll talk about their mechanisms, and then we'll talk about mutagenesis engineering. So these topics in here are going to give you the foundation that you need to make predictions about how enzymes work. And the overarching goal is for you, at the end of these two lectures, to be able to look at some reaction and then maybe not design the perfect enzyme, but make some predictions about how that enzyme might work. Okay, let me give you an example of that. And this is an enzyme that we won't be talking about, but it's one that I want you to think about. This is actually a quote from the Iliad, and the quote is: as the juice of the fig tree curdles milk and thickens it in a moment, though it is liquid, even so instantly did Paeon cure fierce Mars. This fascinates me. So in this reaction, you can take a branch off a fig tree, break it open, and actually this kind of milky liquid flows out. And you can drop that in a big bucket of milk. And what will happen is exactly what's described here in the Iliad. Now, our goal is to understand how an enzyme like this might work. If I tell you that the milk is getting solidified by a certain reaction, then you might be able to predict what the enzyme mechanism is that makes that reaction possible. Okay, so that's our goal. A little bit of a mystery to set things up. Let me talk about some announcements first, and then we'll get into the meat of the discussion. So this week, please read chapter seven, work the odd problems as usual. Our midterm is going to be a week from Tuesday. It's going to cover through chapter six, not seven, my bad. And it will be comprehensive in the sense that there might be some concepts from the first three chapters, but it will focus largely on the more recent material. So when you're studying, what I'd like you to focus on are ideas and problems that are in the assigned homework, such as these odd problems in every chapter. I'd like you to focus on the problems that are discussed in discussion, and then I'll also post a sample midterm, which will give you an idea of the types of problems that I'm expecting you to know. So hopefully you're already starting to study for this, and that will be coming up pretty soon.
Also coming up: abstracts for the proposal, the final proposal report, are due this Thursday at 11 a.m. We've already talked a little bit about the format of the abstract, but I've also released on the website more details about the proposal assignment, and I'd like to take a moment just to review those with you now. So very briefly, let's take a quick look. And on the website, here's the website of course, here's the proposal assignment: I have a very detailed description of the chemical biology proposal that you're going to be writing. In brief, what it tells you is that you need a simple idea. So in this first paragraph it says, don't come up with something that's the next Manhattan project. Don't tell me if I get a billion dollars, I could do something like solve toenail fungus or something. Tell me something that I can do for say $100,000 or even less, $10,000 let's say. Those are the kind of proposals that attract attention: clever ideas, things that people hadn't thought of, that show brilliance, that show creativity. The sort of big ideas, sequencing a thousand genomes, there are people who are doing that and they're creative in their own right, but that's not necessarily what this class is about. It's a simple creative idea that interests me. And let me talk to you about some ground rules. You must choose a topic that will improve human health and well-being broadly. So there's lots of ways to do this. They could be things like improving the energy situation on this planet. If you have a new way of generating energy using enzymes, I would love to see it. And that would improve human well-being broadly. The focus, though, of your proposal must squarely qualify as chemical biology. If your proposal does not hit the topic of chemical biology, I will know from the abstract, and I'll give it back to you and tell you to largely change it. It's very important that it fits the definition of chemical biology. Your proposal must have a hypothesis, or a very good reason not to be driven by testing a specific hypothesis. And then after that, you need to think about backup plans, creative variations, and further insight. So good proposals are a little bit like an onion. At the core, you have some clever idea. You have something that, if you could do this clever idea, is going to change the world. But equally importantly, you have a lot of little backup plans and contingency ideas. So if the main idea doesn't work out, you have a bunch of backup ideas that are waiting in the wings that are going to rescue the whole thing and turn it around and make you famous. Okay? So that's the ultimate goal of a good proposal. And along those lines, you really should be having more than one idea. So a proposal has one great idea and then there's a bunch of other little ideas that are kind of supporting it. Do not propose experiments that require human subjects or samples obtained from humans. This is important. I know many of you want to go to, I don't know, dermatology or medical school and become dermatologists later. And I've gotten proposals about picking scabs and things like that. Those do not interest me. That is not what this class is about. You know that's not what this class is about. So I will not accept any proposals that require you to collect samples from humans. Your proposal must include control experiments. We've discussed these. We've had a negative control. So we've discussed that. And then the next part is coming up with ideas. So how do you do this?
The first thing you need to do is come up with a clever idea. Here is one very simple formulaic way to be brilliantly creative for the rest of your career. Okay? If you learn this formula, you can be incredibly creative. Okay? And I'll be honest. This has always worked for me. All I do is I take a new technique and then I simply apply this new technique to a problem that's already existing. Okay? And you too can do this. We've been talking all quarter about new techniques. You have new ways of screening libraries. We talked about RNA aptamers, for example. We talked about phage display. We're going to talk today about measuring enzyme activities. There are all kinds of neat new techniques that you could use. You simply apply those new techniques to an existing problem and boom, you're creative. That's all it takes. That's all you have to do to be creative: you scan down the list of all the new techniques we've presented in this class, you scan down the list of problems, you take column A and you take column B, one from A, one from B, you put them together and again, boom, you're creative. That's all it will take for you to come up with a creative new idea. It is essential for this idea to be creative, for it to be novel, and if it is not, I will return it to you ungraded. Okay? I'm fiercely defensive about creativity. This must be creative. Okay. Now, the other thing is, after you come up with this idea, you need to verify that in fact it's original. This idea can't be so outlandish as to be impossible, but on the other hand, if it's already been done, it doesn't count as creative, even if it's brilliant. What I usually do next after I come up with the creative idea is simply type it into PubMed, simply type it into Google, and see what else has been done in that area. If it turns out that someone else has done this idea that I thought was brilliant, Yahtzee, Yahtzee, I think that's fantastic. That tells me that my idea was brilliant enough that someone's willing to invest their own money in it. It doesn't bother me at all if I'm coming up with ideas that other people are willing to invest in and do. In fact, if anything, that tells me that I'm on the right track, and that should also tell you that you're on the right track. Don't panic if it turns out that other people have come up with the idea before; that's perfectly acceptable and it's actually kind of normal. It's a good sign. It means you're going in the right direction. We talked a little bit about different ideas. Let me scroll down a little. Here, after you finish screaming Eureka about your idea, the real work begins. You have to dig into the literature, learn the field just a little bit, and know something about the area that you propose. That's really the real work of this proposal. If you spend all of your time trying to think of the original idea, you're wasting time. 95% of the effort comes after you have the idea. Only give yourself a limited amount of time to think of the new idea. After that, start doing research and get that idea into shape. Don't spend a lot of time just cycling through ideas and kicking yourself and saying, oh, it's not the world's greatest idea. Fine. You don't need the world's greatest idea for this assignment. What you need is an ability to argue successfully for that idea. That's what I'm grading you on. So along those lines, focus on that sort of thing: 95% of the work comes after you have the idea. Okay. We've talked a little bit about the format of the abstract; that's listed here.
Here's some stuff about the format of the assignment. The assignment is going to be around five pages, not more than 10. I don't want to read it if it's longer than 10. No one wants to read it if it's longer than 10. Somewhere in there, there should be lots of figures. This is important. The level of detail should be sufficient for someone in this class to understand what you're proposing. You should be able to hand it to a random stranger in the class, just turn to the person on your right, and hand him or her the proposal, and then that person should get some idea of what's going on and should be able to judge it. That's the level of detail. You don't need to tell me about every experiment. You don't have to tell me where you're going to buy the materials and stuff like that. Most of all, you don't have to do the actual experiment. Good news. Your Visa card is safe, because I'll tell you, those experiments are expensive. Then finally, if you'd like to have your graded proposal returned to you with comments aplenty, then you have to give me in advance, attached to the assignment, a self-addressed stamped envelope. If there is no self-addressed stamped envelope, then I'm going to assume that you don't want any written comments back on your proposal. That's quite all right. I just don't want to spend time writing comments if it turns out you're not going to pick it up. If you want comments back, I will take the time to comment on your proposal. The TAs will take time to grade it and comment on it as well. You'll get back something that has some feedback to you. I know not everyone wants that, so it's totally up to you. Last thought: it's important, as usual, that you turn it in through the turnitin.com website. This will be scanning for plagiarism. Bad news. I picked up plagiarism on the last assignment, which is disappointing to me. It happens every year, though, and it drives me nuts. We've already talked about it. I'm not going to spend more time belaboring it. Okay. Now is the time I usually ask you for questions. If you have questions, don't hesitate to shoot me an email. Ask the TAs. Miriam and Krithika know a huge amount about this assignment. They can help you. Okay. Let's move on. Next, I want to talk to you about office hours. I have office hours this week, even though I'm not here on Tuesday. I will have office hours at the usual times and usual places: Thursday, immediately after lecture, and then Wednesday, 2:15 to 3:15 in my office. Miriam has her office hour on Friday, and Krithika has her office hour on Tuesday, usual times, usual places. Okay. That's it for the announcements. Let's get on with our regularly scheduled program. I want to talk to you about enzymes. Enzymes are truly remarkable. They are attractive to celebrities like these two, and they're also attractive to chemical biologists. And I would say they're attractive for the same reasons. They're attractive because they do transformations that would otherwise be inaccessible. Enzymes make possible the things that otherwise would be impossible, that would otherwise just take too long or require too much energy, and we'll talk a little bit about how they do that. And I'm just so entertained when I find celebrity endorsement of my favorite topic. Here in the New York Times last year, actually on the same day I was delivering this lecture last year, February 22nd, 2012: enzymes try to grab the spotlight. And there are tons of enzymes that are found in papaya.
They're called papain, and they're notable for digesting proteins. They're notable for digesting skin, for example. They're actually used as treatments for people who are undergoing therapy after bad high-temperature burns, as a way of digesting away necrotic tissue. And these guys have clearly some good facial structure here, so maybe they're onto something. Okay, let's talk next about how this topic ties in with what I told you about last Thursday. Last Thursday I was showing you how simple rules can dictate protein structure, and in a moment we're going to be applying these same rules to understand enzymes. Protein structure leads to protein function. So the shapes that the proteins were assuming on Thursday are what allow the proteins, these enzymes in this case, to actually acquire their unique function. And what I find so beguiling about this idea of conformational analysis is that the rules are so simple. We're talking about something as simple as just eclipsed versus staggered ethane, or gauche versus anti butane. This fascinates me. This will keep me running to work for a really long time, because that's such a simple idea. These are such foundational concepts in modern chemical biology that I can explain them to my grandmother. These are things that totally make sense. It makes sense to you that you would want to avoid electrons banging into each other. Electrons hate banging into each other. It makes sense that things should try to spread out as far apart as possible. And if we think about things in that way, then such complex structures as the proteins that we're about to look at, their structures start to make sense as well. We talked about how, as a consequence of these simple rules, some amino acids can be found in specific types of secondary structure. And there are tables of this. And you could even make predictions about the secondary structure of proteins based on nothing more than the amino acid sequence. And surprisingly, these predictions turn out to be pretty good. They're about 80% accurate or so. And from there, people have been attempting to predict protein structure for a long time, and quite a bit of progress has been made in recent years towards that goal. We also discussed disulfides, which provide spot welds that hold together independent regions of protein structure that otherwise would sort of flop apart. And we also talked about how readily exchangeable these disulfides are, to allow formation of new disulfides and exchange of one disulfide for another. The next topic we discussed was this hierarchy of protein structures, from primary structure to secondary structure to domains to tertiary structure to assemblies, et cetera. And so this helps us organize our thinking as these assemble into complex architectures. So we're going to be making reference to this as we start talking about enzymes. And then we ended with the concept that a relatively small number of protein domains are found very commonly in the human proteome. Now the truth is, I didn't finish the discussion of this. I need to pick up just very briefly with a little bit more about protein structure. So, before enzymes, last thoughts on protein structure. This is an example of an all beta sheet protein called an immunoglobulin domain. These oftentimes assemble into long strings. And these are used in proteins like titin. Titin is found in muscles. Have you ever wondered why muscles are so strong?
You've probably seen, for example, on the Olympics, Olympic weight lifters. You've seen them flexing these steel bars. They're lifting this stuff up. And the steel is flexing. And the titin in their muscle is hanging on. Well, what happens is there are long strings of these immunoglobulin domains that are lined up like beads on a string. So you have IG domain; that's another word for immunoglobulin. So you have IG, IG, IG, IG. And as you pull on the ends, one of these can unfold without snapping the whole muscle fiber. So the titin actually is holding things together. So an individual domain can unfold without breaking the entire protein chain. And that gives muscle, and specifically gives titin found in muscle, some remarkable properties as a material. Okay, so on the left, this is the protein that's found in muscles. And I'm only showing you one immunoglobulin domain. Notice that the ends are 180 degrees apart. This then sets up these long strings of titin that can extend, you know, up to the roof up here and then down through the floor down here. You can have very large numbers of these lined up. On the right, here's an example of an enzyme along the lines of the kind of thing I want to talk to you about today. This is a really remarkable enzyme. This is one of those enzymes that makes it possible for you to live on this planet. This is an enzyme called superoxide dismutase. And its active site is actually not where you expect it to be. I think when you look at these immunoglobulin domains, these beta sandwiches, your expectation is that the inside is this kind of deep cave. And that's not the case. Instead, this inside is chock full of side chains. This is full of stuff. It's only actually on the outside where the real action is taking place. And over here somewhere is where the active site of the superoxide dismutase is found. And incidentally, it's mutations in this enzyme that are responsible for diseases like Lou Gehrig's disease, which is, you know, a truly terrible disease. And so mutations to these kinds of proteins, these enzymes, have very, very serious medical consequences. Okay. Last thought on immunoglobulin domains. They're named after the antibodies in which they're found. And this is, again, one of these truly ubiquitous domains. Each one of these lobes over here is an immunoglobulin domain. And notice how these are just all strung together into an antibody. It should come as no surprise to those of you who were attending last week's lecture that the business end of antibodies, which are professional binding proteins, is found at the loops. Okay. So these antibodies are designed for binding to things. That's their role in life; they're professional binding proteins. And in order to get that kind of binding, the loops out here are exactly where you can pick up that kind of binding. Those loops are flexible. Remember, we talked about the low number of hydrogen bonds to the backbone. We talked about how the loops can accommodate many different shapes and sizes. That's what equips antibodies with the ability to recognize foreign attackers. Right? You cannot set up in advance an antibody against everything on the planet. These have to be just ready to pick up random things that you might encounter when you visit, I don't know, the taco wagon out here or something like that. So you have to be ready for that kind of thing, not knowing in advance what the shape is going to be.
So having this flexible, versatile molecular-recognition domain, with these loops over here to accommodate diverse binding partners, turns out to be key to understanding their activity. All right. Another beta sheet protein, a good friend of mine, one I've published many papers either using or studying: a protein called streptavidin. And good news, we're going to be talking about this again about 15 slides from now. So it gives me great pleasure to introduce you to the wonderful streptavidin. Streptavidin is charged with binding to biotin. And again, this is an all beta sheet protein. Oh, there's a little tiny alpha helix, but it's largely all beta sheet. And it forms these wonderful little beta barrels. At one end of the beta barrel, this small molecule called biotin sits. And we'll look at the structure of biotin in greater detail in a moment. This is an example of a small molecule. The molecular weight of biotin is about 244 grams per mole. It's tiny, tiny, tiny. And biotin is trapped by the quaternary structure of streptavidin. And this is an important concept. If we look here at, say, the secondary structure, or sorry, the tertiary structure of streptavidin, your expectation is that biotin is not going to be very firmly held in this beta barrel, right? I mean, look at all the space. You can imagine the biotin just floating away and coming out of the streptavidin without very much trouble. Okay? The tertiary structure doesn't begin to hint at the extraordinary abilities of this molecule to grab on to biotin. And here's what I mean, okay? So now this is the quaternary structure of streptavidin. And streptavidin consists of a homotetramer of four streptavidins that are noncovalently joined together to trap biotin. And notice there are four biotins bound here. Now what's happening is the alpha helix from a neighboring streptavidin is sticking down over the top of biotin. I've highlighted that for you in black over here. And you can see it's actually forming a trap door to slam down over the top of the biotin and prevent the biotin from floating away. That turns out to be key to its activity. So streptavidin evolved to bind biotin with astoundingly high affinity. And we'll talk about the exact number in a moment. But it is really extraordinarily high. And it evolved to bind this cofactor called biotin as a way of killing any bacteria that happen to be present. So for example, a related protein called avidin is found in egg whites. So egg whites have a high concentration of avidin. And that means that if any bacteria try to colonize the egg, you know, I'm talking about hen eggs here, if any of those bacteria try to get in there and go to town and eat all the juicy richness of a wonderful egg white, they're going to die, because their biotin will get sucked up by the avidin and then trapped almost permanently. And that turns out to have fatal consequences for the bacteria. Turns out that it also can have fairly fatal consequences for humans. If you live on a diet of nothing but egg whites, your biotin will also get pulled out of your body. And this is kind of an astonishing fact, because it turns out it doesn't take a lot of biotin for you as a human to survive. Biotin is an essential cofactor in the synthesis of lipids. But there are eccentric people out there who constantly do experiments like this.
And when they show up in hospitals after eating a diet of nothing but egg whites, the physicians tend to be totally baffled, because they're not used to seeing such bizarre symptoms. And so there was a case, I'll send it around, I think this was in the New England Journal of Medicine, of this English guy who showed up and was living on a diet of nothing but tea and egg whites. And he had all this bleeding out of his gums and his pores, and anyway, he was falling apart, basically. And the astonishing thing is it would take just a few micrograms a day of biotin for him to be totally healthy. But there's such a high level of avidin, again the homolog of streptavidin, present in egg whites that it was actually leaching all of the biotin out of his body. Okay, pretty extraordinary biochemistry, pretty extraordinary molecular recognition. It absolutely fascinates me to understand how this works better. It's something I've spent a lot of time thinking about. Okay, and yet another example of a very common protein fold that's found in the human proteome: the WD proteins consist of these beautiful propeller-like assemblies where there's actually seven of these little triangles. They look kind of like slices of a pizza that come together to form these large assemblies. These act as scaffolds to organize big machines found inside your cells. So each one of these faces over here might bind to a different protein and bring them together, like it's an assembly line for putting together really complex things inside your cell. Absolutely fascinating stuff. Something else that's truly bizarre: what's up with the seven-fold symmetry? Four-fold symmetry I can handle, but we humans don't like to think in seven-fold symmetry. So I'll just leave that as something for you to puzzle over. Another very common one, and I'm switching gears, going down the chart of the most common protein structures found in humans: collagen is a very common protein structure. It's actually an unusual three-stranded coil. It differs from the alpha helical coils that we saw; the twist is different. And it's a little hard to see, but there are actually three different colors here: a green, a purple, and a blue. These three strands are winding around each other, and this makes a very strong framework for collagen. Collagen is another one of these ubiquitous structural proteins found in the body. Notably, this protein requires a post-translational modification, introducing a hydroxyl group onto proline residues, and that has the effect of setting up one particular preferred structure, a particular twist, in the collagen triple helix. Without this hydroxyl, the protein is unable to assume that conformation. All right, let's switch gears. I want to talk about GPCRs. This is the class of proteins that makes it possible for you to see me, that makes it possible for you to smell, for you to taste. Many of your senses totally depend upon this class of proteins, so I think we should take a moment and be grateful for their existence. How do these things work? How is it that you're going to sense a photon? How can you actually respond to light? You can't bind light, right? So how do these things work? So in short, these sensing proteins, again, they're called G-protein coupled receptors, or GPCRs, are all alpha helical, and notice that, you know, they each have seven alpha helices.
These seven alpha helices transit the plasma membrane of the cell. So going through here is the plasma membrane. So there's an outside and an inside: the interior is facing the cytoplasm, and the exterior is facing the extracellular milieu. And these change conformation upon binding to things. In the case of a photon, the GPCR responds by having an isomerization of a carbon-carbon double bond. So the photon hits, it isomerizes this carbon-carbon double bond, flips it from one configuration to another, and in doing so, that rearranges the conformation of these residues down here that are found on the inside of the cell. Okay, last thought. Notice that this is a coiled coil, and if you look carefully at this, it is a left-handed coiled coil. Almost all of the coiled coils found in nature are left-handed. Notice that your left hand can trace this out; your right hand doesn't want to trace it out. Okay, so these are very common, because alpha helices in general are hydrophobic secondary structure. They fit nicely into membranes, and this property makes them very useful for sensing what's happening outside the cell and communicating it to the inside of the cell. Next one: I'm going to talk very briefly about alpha-beta proteins. This is an enzyme that we'll talk more about very shortly. This is a barrel-like protein. Here's the barrel in the center. This is the active site. So you can mix and match alpha helices and beta sheets. I'm showing you this because I didn't want to give you the impression that secondary structures never mix. In fact, they're very commonly mixed together. Repeat proteins. So repeat proteins are another very common assembly of proteins. This is an example of an ankyrin repeat protein. Notice that it has two sides. The concave side has these loops, and wouldn't you know it? The loops turn out to be the key to its binding activity. This is a very versatile protein. It's used in many different contexts, and what happens is these loops are mutated to give a particular binding property that's then useful for the cell. Its counterpart is the leucine-rich repeat, which now has loops on the convex side. The ankyrin repeats had loops on the concave side. This one has loops on the convex side. These loops then can grab on to the binding partners, analogous to what we saw with the antibodies. These two are very versatile structures that can be evolved by organisms pretty readily, and that equips organisms with new binding activities, which in turn can be used to respond to environmental changes, et cetera. All right, last structure that I want to talk to you about: peptide binding domains. This is an example of the SH2 domain. This tiny little domain shown here in yellow binds phosphotyrosine-containing proteins. So proteins that have been post-translationally modified using a class of enzymes called kinases, which we'll talk about in a moment, bind to this SH2 domain and fit into very deep pockets. So there's a deep pocket down here. So these evolved basically to have specificity, to bind to a particular sequence, so they're not binding to every phosphotyrosine; they're picking out specific binding partners. Another peptide binding domain of note are SH3 domains. These bind polyproline helices. This is the I plus 3 helix, the 310 helix that we saw during Thursday's lecture. And this binding pocket is a very shallow one.
It almost looks like the peptide is resting on the top of this SH3 domain. It's like a butterfly, kind of alighting on the top of the SH3 domain. And notice how delicately folded it is into this threefold-symmetry helix. So if we look down one axis of this helix, you can see how it's forming this polyproline I plus 3 helix. Okay, it's hard for me to talk about this without having a few favorites. And I know as a chemical biologist I shouldn't have favorite molecules, but I do. This is one that I spent years of my life thinking about in my wasted youth. I was interested in these MHC receptors for reasons that are too bizarre to explain right now. But suffice it to say, these are the receptors that your immune system relies on to let it know when the red coats are coming. These are the receptors that raise the alarm when foreign invaders are trying to take over your physiology. And the way this works is, a small percentage of peptides that are synthesized by the cell are digested and then displayed out on the surface of the cell, just like little flags. And the idea is that if a virus has taken over the cell, little flags of virus will appear outside on the surface of the cell, and the immune system then knows: oh no, that cell has been infected with viruses. I better kill the cell. I better mount a strong response. This is a very effective way of alerting the immune system. OK, here are some old friends as well. Notice down here, do you recognize that domain? Does that look familiar? Yes, that is the same domain we saw a few slides ago. This is the beta sandwich immunoglobulin domain. And here it is making a cameo in a slightly different but equally important role. Here it is actually lofting the peptide up off the surface of the cell. Down here is the surface of the cell. Here's the scaffold of this immunoglobulin domain kind of holding it above the surface of the cell, making it a little easier for the flag to be seen by passing T cells, especially cytotoxic T cells, that take an interest in these things and then can go into action and kill the cell if necessary. All right, one last example of this. The last example is a slightly different variant of these MHC receptors. In this case, this is one called class 2; the one on the previous slide was called class 1. The detail is not so important, but this guy actually displays peptides, not necessarily ones that are being synthesized by the cell, but peptides that are being engulfed by the cell. So the cell is kind of randomly taking up extracellular material. And so again, this gives the immune system a different look. So MHC class 1 reports on what's happening inside the cell. Class 2 reports on what's happening outside the cell. And notice that the structure of this peptide, when viewed down the same axis that we looked at earlier for SH3, again starts to bear some hallmarks that we've seen before. Notice that it also has that sort of threefold, kind of triangle type of geometry. And yes, this is also assuming a polyproline-type helix, analogous to the I plus 3 helices that we saw earlier in this class. So hopefully some of the concepts that we saw earlier are finally coming into play. And what I want you to do is first have an aesthetic appreciation of these things. These are beautiful. I like to think of this one as like a hot dog in a hot dog bun. I mean, look at this thing. It's so juicy, I could eat it.
But equally importantly, it has these immunoglobulin domains, again, that have the wonderful function of lifting it off the cell surface. At the same time, it's presenting lots of surface area out here so the cell can recognize whether or not this flag belongs to self or non-self. It can even interrogate this one receptor and determine whether or not the original cells are from self versus non-self. And this is really one of the challenges for organ transplantation: dealing with this class of proteins, where different humans have different MHC receptors. And this is one way that the immune system keys in on organs that have been transplanted and knows to kill them. Okay, now obviously I can talk about this particular topic for hours, but we don't have hours, so we'll have to move on. All right, I want to finish our discussion of protein structure by talking about higher order assemblies. And I guess our best example of this is a structure that I introduced to you earlier, collagen. Collagen, again, plays this key role of structurally strengthening bones, joints, all kinds of important things. This is yet again one of those proteins that makes it possible for the weightlifter to hoist a loaded bar over his head and have the bar flex. I've always wanted to do that, but I don't think that's going to happen. But just to see the bar flexing is a thing of beauty, right? How is that possible? Okay, so obviously these things are really strong. Collagen is remarkably strong. It also is assembled into an ordered structure, where that ordered structure supports each of its constituent fibers. And I guess the best example of this that you'd be familiar with is something like a rope, right? The individual strands of a fibrous rope are not so strong, but when you wind them together and they're all supporting each other, you get something really strong that can anchor an aircraft carrier to a dock or something like that. All right, so here's the way collagen works. You have to control its assembly so that it doesn't, you know, prematurely wind itself up and then get all tangled. Okay, so what happens is the triple helix is formed with some caps on the end, and these caps on the end remain for quite a while. So the end cap pieces are brought together, and then the whole assembly, with the caps still in place, is secreted outside the cell, and where these red arrows are, these caps are then snipped off using proteases, the scissors of the cell, and that gives you formation of fibrils. Okay, so without that, you get this kind of sticky, tangled-up mess; but what ends up happening instead is a very detail-oriented assembly where each step in the process is carefully controlled, and that's essential to making structures that are really strong. Okay, now I want to move on. I want to talk to you next about enzymes. We've seen protein structure; now let's talk about what these proteins are good for. Obviously they're good for strength and structure. I want to talk about catalysis next. Okay, so in order to talk about catalysis, I have to introduce you to some measurements of strength of binding and of catalytic efficiency, and so the first thing I'm going to have to do is define a few equilibrium constants for you.
The first of these is used to describe the strength of a non-covalent receptor-ligand interaction, where the receptor is indicated as R and the ligand is this little sphere indicated as L. Now if you have a bunch of receptors on the surface of the cell that want to bind to ligand, the ligand is going to hop on, it's going to hop off, it's going to hop back on, it's going to hop back off. So we need some way of describing the occupancy. How many of the receptors are actually bound to the ligand? How many of the ligands are free in solution? So the way chemists do this is using equilibrium constants. These equilibrium constants are kind of special, so they get a special name, but they're more or less the KEQ that you learned back in Chem 1. So here's the way this works. We can describe some receptor-ligand complex that's formed as having a dissociation constant, in which the receptor and the ligand dissociate from each other, and that dissociation constant KD is equal to the concentration of receptor times the concentration of ligand divided by the concentration of the receptor-ligand complex. The inverse of the dissociation constant is the association constant, abbreviated KA. Okay, but again, these are just fancy equilibrium constants. However, they tell us quite a bit about the strength of an interaction, and I'm going to be referring to them. One thing you need to know is that a lower KD, a lower dissociation constant, means a stronger interaction, and we're going to stick with KDs. Okay, everyone out in the pharmaceutical world, in the biochemical world, discusses things largely in terms of KDs. Basically, every time I hear someone give me a KA, I mentally take the inverse of that number and then think about it in terms of KD. It's just a convention. Okay, but what matters is that a lower KD means a stronger interaction. That means more of the ligand is bound here. Right, so you have more of the complex formed, bigger number down here in the denominator, lower KD. Let's take a look at some of these. Okay, I was talking to you a little bit earlier about organ transplant and rejection. So after people receive transplanted livers, they're given a class of drugs called immunosuppressants, which suppress the immune system. And we know quite a bit about receptor-ligand interactions through classic studies done by Stuart Schreiber and others that looked at how these immunosuppressants work. Here are two examples of immunosuppressants. On the left is a small molecule called FK506, and on the right is a small molecule called rapamycin. They both work by targeting a binding protein that the Schreiber laboratory named FKBP, for FK506 binding protein. And here it is neatly fit into this receptor, which is FKBP. So the ligand is the immunosuppressant drug. The receptor is the FKBP. And notice that this is finding a really deep binding pocket to bind to. Let's zoom in. Okay, let's take a closer look. So imagine now that we can zoom in, just looking at the green ligand. What we would see is something like this, where in blue this is the region of the small molecule that's bound by FKBP. So again, on the left is FK506. On the right is rapamycin. Notice some similarities here. Notice that in blue these largely have the same structures. That's not a coincidence. That same structure helps orient the molecule and makes it so that this half of the molecule can very readily bind to FKBP. Notice that in the part that's not shaded, these two are wildly divergent.
Okay, for the molecule on the left, the part that's unshaded looks completely different from the thing on the right. Okay, and that's not too much of a surprise either, because it turns out that these two molecules affect different pathways in T cells to suppress the immune system. And these ligands act sort of like the meat in a molecular sandwich, and they recruit two different top layers of bread. This one over here recruits a different protein than this one over here. However, both of these molecules bind to FKBP with very high affinity. And the way we know this is high affinity is that their KD, the dissociation constant, is in the subnanomolar range. That's really good. That's really, really tight binding. And it turns out that most of the pharmaceuticals that are approved tend to have affinities for their targets in this kind of very low KD range. Why that is will be apparent to you in a few slides. Okay. So this is one way of describing noncovalent interactions. I next have to tell you how we're going to be describing speeds of reactions, the kinetics that make reactions possible. There are two kinds of reactions that we're going to be seeing in this class. Kind number one are unimolecular reactions. These are reactions where you have some reactant and it goes through a transformation, and that's it. There's no other species that's implicated in the reaction mechanism. That's it. It's just this one tetrahedral intermediate falling apart, collapse of the tetrahedral intermediate. The rate of unimolecular reactions equals some rate constant, little tiny k, times the concentration of the starting material. Okay. This k over here, I'm emphasizing tiny k for a reason. The little k indicates rate constants; that's totally different from the equilibrium constants I was showing you on the previous slide. Never the twain shall meet. They're two totally different things. It drives me crazy, though, that they're both symbolized by some form of K, and there's nothing I can do about that. Okay. We're stuck with that. It's old-timey nomenclature. All right. The next one. We're also going to see bimolecular reactions. These are reactions that have two reactants that are colliding with each other, and that collision results in formation of a new product. The rates of these bimolecular reactions are going to be equal to some little tiny k, the rate constant, times the concentration of reactant 1, in this case hydroxide, times the concentration of reactant 2, which we're calling Y in this case. Makes sense? Okay. I expect that this has been a review. This is something you've seen before. In biology, though, these rates vary enormously. Okay. Check this out. This rate constant k1 ranges from 10 to the 13th per second to 10 to the minus 7 per second. That's 20 orders of magnitude difference in speed. Okay. This is at the wild, you know, fast end of the scale; these are things that are a total blur. And at the super slow end of the scale, these are things with geological timescales that are so slow they simply don't even matter in biology without some sort of catalyst to speed them up. Okay. So these are the parameters we're going to use. Let's now think about kinetics, first of non-covalent interactions, and then we'll talk about enzymes. Okay. So for a non-covalent interaction, you can imagine the ligands hopping off of the receptor. When that happens, there will be some speed of this that will have a rate constant of little k off. Similarly, ligands can hop on to the receptor. And again, there's a rate constant, little k on.
And naturally, if this is at equilibrium, then you can work out that the KD equals the ratio of the k-off to the k-on. Okay. Everyone still with me? Again, this is at equilibrium. Okay. Good. Here's the thing. The typical rates of binding are, again, wildly different. And this is a table from the book, table 6.1. I want you to take a moment to just gaze at the truly awe-inspiring nature of these differences in speed. Okay. So let's just take a moment to appreciate this. What I'm showing you is a series of different receptors and, over here, a series of different ligands. At the top, these are small ligands. These are small molecules like biotin that we saw earlier. And at the bottom, these are large ligands. Okay. Now check this out. This is really cool. Notice that the on-rates for all the small ligands are roughly the same. Very, very little difference. They're, you know, all right in that 10 to the eighth range. And hey, guess what? 10 to the eighth is kind of near the speed limit. The speed limit for zooming through the cell is going to be somewhere around 10 to the ninth per molar per second. Okay. That 10 to the ninth is a physical constant. You can't bounce through water any faster than that. And so these small molecules are zooming along about as fast as they possibly can to fit into their receptors. But check this out. Off-rates. These off-rates vary enormously. They range from, say, nine over here up to 100,000 over here. And down here, in the big things, huge changes in off-rates as well. Okay. So what this tells us is, if you are trying to design the perfect pharmaceutical, the perfect therapeutic to treat, I don't know, muscular dystrophy or something, you want to spend a lot of time thinking about off-rates. Off-rates are where the big money is when it comes to therapeutics, when it comes to pharmaceuticals. They all have roughly the same on-rates. What differs is off-rates. And those off-rates are how you can determine whether or not a pharmaceutical can be given at a low dose versus a high dose. But I'm getting ahead of myself. We haven't talked about dose yet. Okay. Let's talk about the large ligands. Large ligands have enormously variable on-rates. You know, there's too many zeros here to count. And then over here, enormously variable off-rates. You know, this should make sense to us just intuitively, because large molecules aren't going to be able to zip through the cell as quickly as small molecules. Right? They're going to get, you know, sidetracked. They're going to try to bind to other things. Their diffusion rates are going to be slower, for example. Okay. Now, let's put it all into effect. Okay. Let's put everything we've seen so far into one, you know, summary that tells us what it is that we care about in terms of treating patients. Okay. So the way it works is, what we want to do is we want to have some biological response. Okay. If our goal is to cure patients of, say, toenail fungus, then our biological response is going to be, you know, what percentage of their toenails are clear of the fungus. Here's the way this works. On the y-axis, this is the percent biological effect. That biological effect results from ligands binding to some receptor. Okay. We've seen, for example, antibiotics that are binding to the ribosome.
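To make the off-rates-are-where-the-action-is point concrete, here's a minimal numerical sketch of the relationship KD = k-off/k-on. The specific rate constants below are hypothetical, chosen only to sit in the ranges quoted in lecture:

```python
# A minimal sketch of the kinetics-to-thermodynamics bookkeeping from the
# lecture: at equilibrium, KD = k_off / k_on. Numbers are hypothetical,
# chosen only to sit in the ranges quoted in class.

k_on = 1e8    # M^-1 s^-1; small ligands sit near the ~1e9 diffusion limit
k_off = 10.0  # s^-1; off-rates are where ligands differ enormously

KD = k_off / k_on             # dissociation constant, in molar
residence_time = 1.0 / k_off  # mean lifetime of the bound complex, seconds

print(f"KD = {KD:.1e} M ({KD * 1e9:.0f} nM)")      # KD = 1.0e-07 M (100 nM)
print(f"residence time = {residence_time:.2f} s")  # 0.10 s
```

Holding k-on fixed (since everything small binds near the diffusion limit anyway) and dropping k-off a thousandfold drops KD a thousandfold — that's the design lever. Okay, back to the dose-response picture and the ribosome example.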
In that case, the ribosome would be your biological receptor, and the biological effect would be death of the cell — that would be killing the bacteria. Okay. And so, in general, we see sigmoidal biological responses when we look at receptor-ligand binding. Okay. When it's graphed as log of the concentration of drug along the x-axis and percent biological response on the y-axis. At the very top, at 100 percent biological response, this will take a very high concentration of drug. Okay. Notice that the numbers are bigger on this side and then smaller over here. This is 10 to the minus 9 over here, 10 to the minus 1 over here, in concentration in molar. Now, I realize this axis is a little confusing — bear with me; there's a reason we did it that way. Okay. So, up here at a really high concentration, 100 percent biological effect. Okay. But maybe at that concentration, you end up with a drug that has to be given with pills that are, like, you know, the size of erasers or something, and no one likes to swallow things that are really big. So we compromise. Instead, what we want is to have a 90 percent receptor occupancy in vivo to see some sort of effect. Okay. That's our goal. So we're going to be measuring biological potency through these dose-response curves. And the major goal of pharmacology is to get up here into this 90 percent receptor occupancy, where you get 90 percent biological effect, greater than 90 percent biological effect. Now, typically, up here — you know, obviously we're approaching an asymptote, so things can stretch out over a really long range up here. So instead, we describe biological potency in terms of an effective concentration for 50 percent effect. That 50 percent effect takes place right here, at the point of inflection for this sigmoidal dose-response curve. And so we compare two different drugs just by comparing the EC50, where the more potent drug will be the one that gets the same 50 percent response but at a lower concentration. Right? It means then that the patient can be treated with a lower dosage to get the same effect and the same benefit. Okay. So, you know, I think it's worth us taking a moment to talk about this, because, like, 90 percent of the students in this class are going to be spending the rest of their lives battling with this sigmoidal curve and trying to get up here and sometimes being down here. Okay. Let's talk about how you measure biological response. Oftentimes in chem bio laboratories, this is measured using an ELISA, enzyme-linked immunosorbent assay. And before I can tell you a little bit about how that assay works, I need to tell you about some reactions that are catalyzed by enzymes. There are two enzymes that are kind of the workhorses for chem bio laboratories. One of these is called peroxidase. The other one's called phosphatase. These are reliable enzymes that will catalyze reactions that lead to turnover of dye molecules. So here are two molecules up here — these two aromatic molecules. These guys up here are clear. Okay. So if you made a solution of these guys, it would look more or less clear. It might look a little yellow, but more or less clear. However, after these two enzymes catalyze these reactions, what ends up happening is you get a dye that has a deep color, a very strong color. This one forms a dark black, actually more brownish color — a very dark color. This one over here forms a bright yellow color.
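Here's a minimal sketch of that sigmoidal dose-response curve in code, using the standard Hill-type equation. The EC50 of 1 micromolar and the Hill coefficient of 1 are hypothetical numbers, just for illustration:

```python
def percent_effect(conc_M, ec50_M=1e-6, hill=1.0):
    """Sigmoidal (Hill-type) dose-response: percent effect at a given dose."""
    return 100.0 * conc_M**hill / (ec50_M**hill + conc_M**hill)

for c in (1e-8, 1e-7, 1e-6, 1e-5, 1e-4):
    print(f"{c:.0e} M -> {percent_effect(c):5.1f}% effect")
# 1e-06 M gives exactly 50.0% (that's the EC50); 1e-05 M gives ~90.9%
```

Notice that with a Hill coefficient of 1, reaching that ~90 percent effect takes roughly ten times the EC50 — which is exactly why a more potent drug (lower EC50) translates into a smaller pill. Okay — back to those two dyes.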
So both of these give us reliable indicators that we can use to follow how much activity is taking place at a certain dosage. So let me show you how this works. What we do is we use plates, and I think I've talked about those before. They're called ELISA plates, colloquially, in the lab. And these plates have 96 wells on them. Okay. So that's over here. And each one of these wells can be coated so that the surface is coated with the receptor. Okay. So here's the receptor down here. The problem is, if the surface likes to bind receptors, it'll also bind to the ligand, and we don't like that kind of thing. So what we do instead is add a blocking agent, typically something like dried non-fat milk — not dry, it's solubilized — and so we take non-fat milk and coat any other place on the surface of this well that otherwise might start to bind non-specifically to ligand. We then add the ligand. The stuff binds. We wash away the non-binders and then add an antibody against the ligand. So wherever the ligand is bound, we're going to get an antibody stuck to it. This antibody is a special antibody. Unlike the antibodies I showed earlier, this one happens to be covalently tethered to the enzyme that I showed on the previous slide — that enzyme is peroxidase. And so this means when you add the dye, this peroxidase goes to town and turns over the dye in this well, and then you can look at a large number of wells to get a dose-response curve. This works really well. This is totally robust. You can use this in your proposals. It works great. Okay. Let's talk next about other receptor-ligand interactions. So I've been showing you ELISAs as an example of catalytic receptor-ligand interactions. I want to get back to streptavidin. So streptavidin has this remarkable half-life of 200 days. Recall that this is the protein — an analog of which is found in egg whites — and if you live on a diet of egg whites, eventually you're leaching all of the biotin out of your body. This half-life over here of 200 days starts to make sense, starts to explain why it is that you can actually leach all of the biotin out of your body. This KD happens to be an astonishing 10 to the minus 15th molar — femtomolar, sub-picomolar really. So a sub-picomolar KD. That's extraordinary, because earlier I showed you rapamycin and FK506 and I said that nanomolar affinity was really high, that's a really great binding partner. In this case, this is one that's a million times better and is really strong. Now, chemical biologists have learned to use this for all kinds of assays. One thing we do pretty often is attach biotin covalently to small molecules and use these as kinds of lures for fishing. So you all know about fishing, right? So the way this works: you have a line, you throw the line in the water, and at the end of the line is a hook. The fish are smart. The fish aren't going to eat some random hook. So instead, we'll put some sort of lure on the hook that's then going to bring the fish up to the hook. And classically, I guess this was worms — I don't fish with worms, I fish with flies. But I also like to fish with small molecules, and when I do, I always use biotin. So here's the way this works. So here is the biotin; it's now covalently tethered to some small molecule, and this becomes the lure. So this side over here is going to attract proteins from the cell, and then this side over here is going to act as a handle. So that's the part that you grab onto and hold.
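As a back-of-envelope aside — and assuming simple first-order dissociation, which is an idealization — the 200-day half-life and the 10⁻¹⁵ M KD quoted here are mutually consistent, and they imply an on-rate near the small-ligand range from table 6.1:

```python
import math

# Consistency check on the streptavidin-biotin numbers from the lecture
# (half-life ~200 days, KD ~1e-15 M), treating dissociation as a simple
# first-order process -- an assumption, but a standard one.

half_life_s = 200 * 24 * 3600       # 200 days, in seconds
k_off = math.log(2) / half_life_s   # first-order: t_1/2 = ln(2) / k_off
KD = 1e-15                          # molar, from the lecture
k_on = k_off / KD                   # since KD = k_off / k_on

print(f"k_off ~ {k_off:.1e} s^-1")      # ~4.0e-08 s^-1
print(f"k_on  ~ {k_on:.1e} M^-1 s^-1")  # ~4.0e+07, near the diffusion limit
```

So the extraordinary affinity is carried almost entirely by the glacially slow off-rate — the same lesson as before. Okay, back to the fishing setup.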
And where this is really useful is if you have some new small molecule and you don't know what it binds to in the cell. This happens to us all the time. We'll have some molecule that we pulled out of, I don't know, a fruit fly screen, where you're looking for molecules that make fruit flies less drunk or something like that. And we want to know, how does that work? So the way you would do this is you'd have some linker and then biotin, and then you basically go fishing and hope for the best. A quick word about the linker. The linker matters a great deal, because you remember earlier I told you about the trapdoor and how the neighboring streptavidin subunit slips over the one beta barrel and traps it. Without this linker, it's very hard for these molecules to be strongly held by streptavidin. The linker allows the trapdoor to close all the way and get wedged tightly shut. Without it, some molecule like this that has this ring system nearby would basically get into the trapdoor and block it and make it harder for it to close all the way. So the linker actually matters quite a bit. Okay, in practice, here's what it looks like. In practice, we have these columns that we flow cell lysates over, and what we're looking for is molecules that will stick to the column by binding, not to biotin, but to the target that's tethered to the biotin. Okay, so this is the way we go fishing. The handle is over here — that's the biotin. It's grabbed onto by streptavidin, and then the lure is hanging out, and you send through junk from the cell — cells that are chopped apart — and you hope that stuff sticks. Other molecules that don't bind to the target flow through and don't stick. And again, this works great. So everything you know about KDs is now being applied. Okay, so now what I've shown you is an example of a really strong receptor-ligand interaction, and I've shown you how to measure biological effect. Let's now zoom in and start talking about catalytic receptors. This is a great example of a non-catalytic receptor. Enzymes are basically catalytic receptors. Everything that we've been discussing in terms of dissociation constants, in terms of binding, in terms of molecular architectures — all of that comes into play when we discuss enzymes. So what do we talk about when we talk about enzymes? We like to talk about these in terms of a few simple parameters. And again, these are parameters that are familiar to us from our earlier discussion. So earlier I talked about receptor-ligand interactions having k-ons and k-offs. Similarly, enzyme-substrate interactions are going to have k-ons and k-offs. They can either form the complex — the enzyme-substrate complex, sometimes called the Michaelis-Menten complex — or they decide not to. And the one little twist here is the fact that the enzyme is going to catalyze transformation of the substrate to some product. Okay, substrate is a fancy word for starting material. And what is happening here is we're going to form this intermediate complex, and this intermediate complex is going to very quickly collapse. If it collapses to the right, then it is going to form a product. And this product will, you hope, quickly dissociate. If it collapses to the left, then the substrate diffuses away, and this goes through an off-rate. Okay, a quick word about this. In this case, I'm showing that the enzyme product — the product — then has to dissociate from the enzyme.
If that doesn't happen, then the enzyme is trapped and inhibited in this state. And guess what? That actually happens. And in fact, it's one of the reasons why, oftentimes, this last step can be a rate-determining step for enzymes that you would not expect this to be a problem for. All right, let's zoom in and take a look at the reaction coordinate diagram for an enzyme-catalyzed reaction. I will tell you in advance that this is grossly oversimplified. The real reaction coordinate diagram is complicated and messy. And I'm going to show you the theoretical, idealized version first. Okay, so enzymes work by stabilizing, and thus lowering the energy of, the transition states. Doing that accelerates the reaction. Okay, so here's a typical reaction. On the y-axis, this is the change in energy, a delta G. Higher on this axis means less stable; lower on the axis means more stable. And as you know, higher means it takes more energy, which means in turn it's slower. On the x-axis, this is oftentimes called the reaction coordinate. It's nothing more than the conversion, the pathway, between enzyme plus substrate going all the way to enzyme plus product over here. Okay, and here it is going through a couple of different intermediates — an enzyme-substrate intermediate, an enzyme-product intermediate — and then also a transition state. Okay, so let's talk about the uncatalyzed reaction first. Uncatalyzed reaction in blue: substrate starts off over here, gets converted to product. That takes a lot of energy. Okay, the enzyme in this case is uninvolved. That's why it has a plus sign. It's kind of hanging around as a spectator. Let's just imagine that would happen. So if that happens, there is this activation energy here — the difference in energy between the transition state and the starting material — and it's very high. We can actually derive the speed of the reaction from knowing that energy, right? And we know a bigger energy over here, a higher activation energy, means a slower reaction. And oftentimes these reactions are too slow to be biologically useful. If we relied upon spontaneous reactions taking place, you wouldn't be a human. It wouldn't be possible for biology to take place. Instead, what happens is enzymes bind to the substrate, lower the energy of the enzyme-substrate complex, and in turn — most importantly — lower the energy of this transition state. The red arrow here is exactly how much lower the energy is. The bigger that arrow is, the better the enzyme, right? Because that's how greatly improved it is. Similarly, the enzyme then binds to the product and then eventually dissociates from the product. The product dissociates. Okay, let's take a look at an example. The example is first a unimolecular reaction. I realize this is kind of a hairy example. Bear with me — it's worth it. So in this case, what's nice about this example is that this reaction is a concerted reaction, meaning it goes smoothly in one fell swoop. There's one step in this reaction mechanism. This is a pericyclic reaction, and it involves the transformation of chorismate on the left to prephenate on the right. And again, the enzyme is called chorismate mutase. And by the way, this is a key step in the shikimic acid pathway, which all the plants on this planet rely on to produce phenylalanine and the aromatic amino acids. This is one of those things that we humans could not exist on the planet without. We cannot do this reaction.
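A quick numerical aside on that "derive the speed from the energy" remark: in transition-state theory, that's the Eyring equation. Here's a minimal sketch; the two barrier heights are hypothetical numbers, chosen only to show how a modest 5 kcal/mol of transition-state stabilization translates into a huge rate acceleration:

```python
import math

# Eyring equation from transition-state theory: k = (kB*T/h) * exp(-dG/RT).
# The barrier heights below are hypothetical, for illustration only.

kB = 1.380649e-23    # J/K, Boltzmann constant
h  = 6.62607015e-34  # J*s, Planck constant
R  = 1.987e-3        # kcal/(mol*K), gas constant
T  = 298.0           # K, room temperature

def eyring_rate(dG_kcal):
    """First-order rate constant (s^-1) for activation free energy dG."""
    return (kB * T / h) * math.exp(-dG_kcal / (R * T))

uncatalyzed = eyring_rate(25.0)  # hypothetical uncatalyzed barrier
catalyzed   = eyring_rate(20.0)  # enzyme lowers the barrier by 5 kcal/mol
print(f"rate acceleration ~ {catalyzed / uncatalyzed:.0f}x")  # ~4,600x
```

Because the barrier sits in an exponential, every few kcal/mol shaved off the transition state buys orders of magnitude in speed — that's the red arrow on the slide. Okay, back to chorismate mutase.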
Plants and microorganisms are the only organisms that have the necessary enzyme to catalyze this reaction — and, for that matter, the only organisms that have this shikimic acid pathway — which is why, indirectly, you must eat plant material to survive. Okay, so check out this reaction mechanism. This is a beaut. This is awe-inspiring. Okay, so in this reaction mechanism, electrons are going to bounce down here, bounce, bounce, and bounce some more all the way up to here, producing a new carbon-carbon bond between this carbon over here and this carbon. These guys are going to join together to produce this new carbon-carbon bond that has the wedge coming out towards us. Okay, so in order for this to happen, this part of the reaction mechanism has to be sticking out towards us. And this whole thing is just going to swing over the top, and all six electrons are going to be flying at once, giving you, in one neat swoop, this new carbon-carbon bond, while breaking apart this old carbon-oxygen bond and breaking a carbon-carbon double bond to form a new carbon-carbon double bond. This is really spectacular stuff. This is the stuff of just, you know, graceful electron choreography in motion. Okay, let's take a closer look and see exactly how this works. Oh, and by the way, I think if you try to do this reaction in lab, you can get it to work, but it's going to be slow. It's going to be a dog. So check this out. This is the way the enzymes do it. This is the way they make this happen. What they do is they grab on to the handle up here and force it into some semblance of the necessary transition state. And in doing so, they're going to lower the activation energy of this reaction. Okay, so here's what it looks like bound — or actually, I'll tell you in a moment what this is. This is kind of like the transition state bound to the enzyme. And in practice, the transition state looks like this. Here, in dashed lines, are the bonds that are being made and broken. And in order to get this reaction to take place, the enzyme has to grab on to this thing over here and basically force it over the top so that this carbon and this carbon are next to each other to form a new bond. If that doesn't happen, then no reaction takes place. And so what the enzyme is doing is just grabbing on and forcing this thing into the right configuration to allow the reaction to take place. Beauty. It's a true thing of beauty. Okay, now here's the way we know that that's how that happens. This is classic work done by my advisor from my undergraduate days at UC Berkeley, Paul Bartlett. And the Bartlett laboratory synthesized an inhibitor of this reaction that looks like this. It kind of mimics this transition state. And guess what? It mimics the transition state so well that it sticks in there and just plugs it up. It binds really well to this enzyme, because the enzyme evolved to bind to this transition state. That's how it works. And so, since enzymes bind transition states with highest affinity, this strategy could potentially yield the best inhibitors. Binding to transition states is the same thing as saying you're going to catalyze the reaction. You're lowering the energy that's required for the reaction to take place. And here we're seeing a dramatic example of this. By binding to this transition state, you're forcing the substrate to get its carbons in the right location for this reaction to take place. And if you don't do this, the reaction — the stuff in there is just swinging all over the place.
And if it's swinging all over the place, it's finding lots of other stuff to do with its time, and the reaction never takes place. I have so much more I want to tell you about. When we come back on Thursday, I have something really cool that I've been saving up for a while. I want to talk to you about how it is that enzymes work, not just in terms of binding transition states. By the way, this is kind of the classical stuff you learn in biochemistry. It's not wrong. It's actually totally right or else I wouldn't say it to you. But I want to talk to you about the other aspect of enzymes, which is enzymes also have motion. They actually are going to be physically forcing together these transition states to lower the transition state energy and in turn accelerate reactions. And to me, that's really one of the most extraordinary aspects of enzymatic catalysis. So let's stop here. When we pick it up on Thursday, we'll be talking about motion as a mode for accelerating chemical reactions.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:15:32 Protein Conformation 0:18:28 All B-Sheet Proteins 0:27:13 WD Proteins Scaffold Together Large Assemblies 0:28:12 Collagen is Formed from a 3-Stranded Coil 0:29:21 All a-Helical Proteins 0:31:40 a/B Proteins 0:33:21 Peptide Binding Domains 0:39:05 Higher Order Assemblies of Proteins 0:41:23 Equilibrium Constants to Describe the Strengths of Non-Covalent Interactions 0:46:54 Following the Speeds of Reactions 0:49:25 Rates of Non-Covalent Interactions 0:50:16 Typical Rates of Binding 0:53:09 Measuring Biological Potency Through Dose Response Curves 0:56:26 Measuring Biological Response by ELISA 0:59:36 Streptavidin-Biotin Offers Near Covalent Binding Affinity 1:00:39 Biotinylated Reagents Used Extensively in Chemical Biology 1:03:47 Enzymatic Catalysts = Catalytic Receptors
10.5446/18870 (DOI)
So we're going to pick up where we left off last time. Last time we were talking about protein structure. And we're going to start from first principles today, looking at the most basic elements of protein structure, and then kind of building from there as we get increasingly more complicated. Okay, some quick announcements. As you know, read chapter 5. Get ahead and start reading chapter 6. We'll be working on that chapter next week. There will be a midterm — the second midterm — two weeks from this past Tuesday, so about a week and a half from now. And that will cover through the end of chapter 6. Okay, and it'll mainly focus on the more recent material since the last midterm. And I believe that's chapter 3 through chapter 6 — so sort of the last third of chapter 3, and then on through the end of chapter 6. Okay, that's what the midterm will cover. Let's see. Today is the day that we have assignments that are due. And so don't forget to turn those in at the end of the class today. Just leave them over here or hand them to the TAs. If you forget, drop them by my office. Okay, questions about where we're going, that sort of thing? Okay, let's take a quick look, then, at office hours. I have office hours today, immediately after class. Miriam has office hour tomorrow. And next week we're back on Tuesday, et cetera. I expect my office hour next week, the floating office hour, will take place probably 2:15 to 3:15. Let me just make that change just so that you know. Okay, so that's also going to be next week's office hour as well. Let's see. This would be a good office hour to come by to talk about your proposal. Recall that the abstract of your proposal is due next Thursday. A week from today, right? Okay, so a week from today you're going to hand in something that's five sentences. I stopped myself just in time. Okay, so let's talk very briefly about the format for those five sentences. Okay, this is what I'd like to see in your proposal abstracts. Okay, so the proposal abstract needs to tell me two things: one, your idea, and two, how it fits the definition of chemical biology. I would say, if you're submitting a proposal to the National Science Foundation, how it fits the definition of chemical biology — not so important. But for Chem 128, it's really important. Okay, because I know everyone in this classroom is a creative individual who has lots of great ideas, but the vast majority of those ideas aren't in the area of chemical biology. So I'm interested in your chemical biology ideas. Okay, so here's what I would like to see in your proposal abstract. It must cover these two topics. And specifically, let me show you the format. Natalie, is this color showing up okay, or is it a little dim? I'll get something darker. Okay, so abstracts are at the start of all scientific communications. The abstract is also known as the executive summary in business communication. And it's a very, very short distillation of the key ideas in a paper. And so the goal of the abstract is to provide those ideas and then, secondarily, to hook the reader. A good abstract should convince the reader that they want to read the rest of the paper. And a great example of this is PubMed, right?
When you do searches on PubMed, you get these abstracts that are turned up, and I don't know about you, but if the abstract doesn't look very interesting or the abstract isn't covering the ideas that I'm interested in, I don't bother reading the paper. I'm sure you're doing this as well, right? If you're not, then boy, you're reading a lot. Okay, so abstracts are absolutely crucial to communicating effectively with audiences. And again, in the business community, this is called an executive summary. And again, it's absolutely essential. So for anything you're going to be writing after you graduate, there's a good chance that you're going to need an abstract. The abstract in this class should consist of the following sentences. Okay, so we'll call this the abstract format. And in general, I'm expecting something that's five sentences. If it happens to be more than five, that's fine. If it happens to be a little less than five, like four, that's probably okay as well. But don't plan for something that's like 10 pages. Okay, don't hand me the whole proposal. The goal here is that I can provide you a little bit of feedback. Okay, so — and I'm going to just number these; we'll call these numbered sentences. Okay, and again, you don't number your sentences, but I'm going to just give you what I expect each sentence to look like. Okay, so sentence number one kind of states the problem. Okay, so: diabetes affects 2 million Americans per year, sometimes with grave consequences. Okay, that kind of states the problem — the big picture of the problem. By the way, I just made up that statistic about diabetes. I don't know what the real number is. I'm just making that up just to give you an example. Okay, so the first sentence states the big picture. Okay, and I really mean the big picture of the problem. Big picture problem. Okay, so start off big. This really should be something large. If you're thinking about, I don't know, a better cosmetic or something — you know, a better anti-wrinkle cream — that might be nice. But honestly, it's not going to fly really as a proposal. Like, if you want to come up with a better anti-wrinkle cream, see me and we can talk about how to, like, hook up with a venture capitalist. You can get money for that kind of thing separately. You won't submit proposals to the National Science Foundation, because they're not going to fund that kind of stuff. What I'm looking for is things that will either increase our understanding of the world around us or solve some problem afflicting the human condition on this planet. And when I say that, I mean it broadly. Okay, so for example, if you want to solve, you know, some disease that only affects mountain gorillas, that could be important, right? Because we humans have a stake in ensuring the biodiversity of our planet. So that's a big problem. I'd like to hear about that as well. Okay, so I'm not going to confine it just to human disease. I'll say, you know, chemical biology broadly. But at the top, at the outset, in that first sentence, you have to show me what the big picture problem is that you're going to be targeting with your idea. And notice that you don't lead off with the idea. You're actually starting by kind of setting up the story. Okay, and in fact, you should be thinking about this along the lines of storytelling. Good proposals are kind of like selling a story.
And so the first sentence is kind of setting the scene, setting the dramatic mood, and telling us why it is that we should be following along with the next part. Okay, so that's sentence one. Yeah, any question over there? "So if we have, like, stats like the diabetes stats you gave, do we have to cite them in our abstract, or later on in our paper?" Yeah, you could do both. Yeah, so oftentimes — and I forget your name. Rideen? Yeah. Okay, so Rideen's question is: what if I want to cite those stats that appeared in the first sentence later in the paper? And that would be perfectly acceptable. Oftentimes there's some overlap between the abstract and the rest of the paper. And that's normal, right? Because remember, the abstract is a distillation of what's in the paper. So some overlap is even expected. Okay, the second sentence focuses the reader on your aspect of the problem — let's say on a specific aspect of the problem. Okay, so the first sentence kind of gets you in the door, right? That first sentence is the one that says, there's this huge problem out there. The barbarians are at the gates. The second sentence says, and here's what we're going to do to counter the barbarians. There's this back gate that I know about, and I'm going to reinforce it. Okay, obviously you won't be writing about barbarians, but that's the example I can think of at the moment. Okay, so focus the reader on a specific aspect of the problem. So the first sentence was the thing about diabetes, as you know, afflicting millions of people. The second sentence says: there's a target for diabetes, a relatively unexplored target for diabetes called, I don't know, FoxA, that offers new opportunities for controlling this terrible disease. Okay, I'm making this up as I go. All right, so now I know that the next part of this is going to be something about FoxA, and I'm going to be looking out for FoxA. Third sentence. The third sentence now focuses on your specific idea. Okay, so the third sentence is where you're going to come in and state your idea as a hypothesis. Okay, now I'm not going to ask you to write a sentence that says "my hypothesis is X, and I will test this by doing Y," but you should have something that's a hypothesis. Do you all know what a hypothesis is? Do we need to talk about hypotheses? If this word hypothesis is unfamiliar to you, you must look it up. You must become familiar with it, because I will be looking for a hypothesis. Your ideas have to be hypothesis-driven. Okay, now I will tell you — now I'm really baring my soul — 90% of the stuff that I think about and want to work on is not hypothesis-driven. I like the kind of science where you go out and you explore something where you don't know what you're going to get. Okay, I like fishing. I love throwing a line in the water and not knowing what kind of beautiful fish is going to get snagged by that line. That absolutely fascinates me. I will do that time and again for, I don't know, hours per day in miserable conditions, because I just love the thrill of the adventure of not knowing what you're going to get. But I have learned to frame my fishing expeditions in terms of a hypothesis. I have a hypothesis that this particular area of the lake is going to be an effective one for fishing, because there's an inflow of water into that particular area of the lake, and that's where fish like to gather, because there's going to be abundant food in that particular spot.
Okay, so if you have an idea where you cannot know in advance really what you're going to find, you still frame it in terms of: I'm going to be doing this, but I have a hypothesis that the way I'm doing it is going to be more successful. All right, and let me give you an example of this — something less abstract, like fishing, and more concrete, like aptamers. So, the previous sentence set up this whole thing about FoxA. So: RNA aptamer libraries have emerged as a powerful technique for studying proteins in the cell, and I want to combine aptamers together with studies of FoxA to explore FoxA as a target for diabetes. Okay, so I have a hypothesis that if I could discover a binding partner to FoxA, then I will be able to do something about this terrible disease called diabetes. Okay, and so I can't tell you in advance the design of that aptamer. I'm just going to make a whole library of aptamers, and it's like fishing — I'm going to just throw them all against FoxA, but I suspect that this is going to work. Okay, so my hypothesis is that within this library I could find something that's going to bind to FoxA. And furthermore, from the library design, I'm going to set it up so that we're going to be more likely to be successful. Okay, so there has to be a hypothesis embedded in your logic of why it is that you want to do what you want to do. Okay, but you don't necessarily have to use the word hypothesis. Okay, next sentence. The next sentence is how it fits the definition of chemical biology, and you must have a sentence that says this in your abstract. Okay, so the sentence must say: "this idea fits the definition of chem bio because" — and then I'll leave a line here. Okay, this is essential. Your abstracts must all include a sentence that says exactly this. I've been teaching this class for a long time — I think this is year 12. And I've learned that if I don't have a sentence like this, I will get all kinds of unfocused ideas. Okay, and I know from talking to some of you about it before that this is still kind of a mysterious idea. But review the definition of chem bio and make sure that your idea squarely targets that definition of chem bio, which is using techniques from chemistry to understand biology at the level of atoms and bonds. Okay, makes sense? Okay, let's talk about the most important sentence of any proposal. This is the sentence that, if it's a good one, will get you funded every time. And I guarantee to you this trick works not just with the NSF, not just with funding agencies — this will even work with your parents. Okay, and those are your major funders these days, I know. Okay, so let's talk a little bit about the last sentence. This last sentence is one that I like to call the payoff. Okay, in short, the payoff sentence is: if this idea hits a home run, here's what's in it for you. Here's what you — meaning society — are going to gain from this. If every expectation that I make, if every hypothesis that I propose, turns out to be correct, here's what you get to benefit from. Okay, so this is the one where you imagine a home run and then deliver. Okay, what happens, what results? So imagine a home run and then what results — all right, sorry, this is getting a little jammed together; I wanted it all to fit on one point. Okay, makes sense? So every abstract is going to end with the payoff. This is easily the one sentence that really matters in proposals, because your proposals are going to be read by a large number of people.
I'm not talking about Chem 128; I'm talking about later, when you go up in front of a research review board and you're asking for more funding for your, I don't know, oil development team or something like that. That last sentence — what's in it for the reader — is really the one that gets you funded, okay? Because oftentimes proposals appear in front of people who don't really understand them. Okay, so proposals have to appeal to a broad audience, and typically proposals are read by groups of 20 people. There might be 20 people between you and the check. And many of those 20 people are trained in other areas. They're really smart, but they might not know very much about FoxA. They might not be able to evaluate whether or not FoxA is a good target for diabetes research. However, they definitely need to be able to understand this payoff, okay? And it really has to follow that it's going to be useful, okay? So you don't want to promise the moon if you can't deliver the moon, okay? So a payoff sentence for our hypothetical proposal might be something like: inhibiting FoxA with an aptamer could provide a new modality for decreasing surges in blood sugar amongst diabetic patients. Okay, and that would be useful, right? That would be extremely important, potentially, for diabetes treatment, okay? But I'm not going to say it's going to cure diabetes if I can't deliver a cure for diabetes. If you promise, you know, a cure for cancer or some disease, you'd better be able to back it up, or else the reviewer reading this is going to start holding her nose or his nose, right? Because they're going to be like, where's my cure for cancer? I thought you promised me a cure for cancer. I don't see a cure for cancer. This might be nice and all, but I don't see the — anyway, right, you get the idea. So you have to be able to deliver what you're promising. But on the other hand, you don't want to under-promise, because this is really the part where the, you know, the person at the other end holding the big bag of cash decides whether or not they want to invest in you, okay? Makes sense? Okay. This is totally formulaic. The formula I'm giving you, this abstract format, works for all kinds of proposals. It works for proposals that are going to appear in front of venture capitalists. It'll work for proposals that appear in front of your parents. It will work for any sort of foundation or group of people holding money. I know, because this is the formula I use, okay? And I want you to follow this formula as well, okay? For this class, just try it. Trust me, it works. Other — any questions about this abstract? Yeah. "Now, you mentioned that it would be a good idea for the journal report to kind of correlate with this. So since the ideas behind probably sentences one, two, and five are going to be really similar, how can we use those ideas without plagiarizing them?" Okay. Great question. And remind me of your name. Jasmine? Jasmine, with a Y. Okay. So Jasmine with a Y is asking: what happens if my background is very similar to the background that appeared in — not the proposal — the journal article? I would say, find another sentence one, the big picture sentence. That big picture sentence has to be your interpretation of why you think it's important, not why some other scientist working at Johns Hopkins thinks it's important, okay? So you have to be spinning this to fit your own interests. Okay.
So for example, if the journal article starts with the sentence, "two million people are going to be afflicted with diabetes this year," then maybe you want to say, "200,000 new cases of diabetes are going to be diagnosed in the next month." You know what I mean? So now you've turned it around, right? And you've focused on something that you think is particularly important. And actually, now that I think about it, if your target is this thing that's going to moderate blood sugar, maybe you want to say, you know, "10,000 diabetics are going to have to have amputations of their limbs because of complications from diabetes," right? So you're basically reinterpreting this big problem and focusing on some aspect that you think is important, because it's your ideas that I care about, not somebody else's ideas. And along those lines, on the payoff over here, I want your ideas for where the payoff is going to be useful. Okay? Why is this going to be helpful for someone reading this who might consider funding it? Okay. Okay. A final thought — that was a great question. Other questions? So I've mentioned this before in an email, and I just want to reiterate it. The very best proposals from this class, I will submit to the campus writing coordinator, and I've been pretty successful at convincing the campus writing coordinator that my students are really extraordinary writers. And so I've been really successful at getting students from this class writing awards, which is nice. You get to add that to your CV. I think they even cut you a check. Okay? So it's a rare time that the Regents of the University of California will give you money, if you're successful. So I'll be on the lookout for the very top two or three proposals coming from this class, because I really contend that the top two or three proposals from this class are good enough that I could put them in front of the National Institutes of Health, and I bet they would get funded. They're that good. So now, okay, this also reminds me of something. Every year when I get those teaching evaluations back, someone says, "Professor Weiss is just fishing for new ideas." I promise you that is not the case. Okay? My laboratory is stocked with ideas for the next 20 years. You can ask Miriam or Krithika. They will assure you that I'm always driving them nuts with some crazy idea I think of on the way into work, and so there's no way, no matter how brilliant you are, that I'm going to be scooping up your idea and then, you know, running into the lab and being like, "you've got to do this thing, I just read it here." Okay? So don't worry about that. Give me your best ideas. Show me your best ideas. It's a bad strategy to hold back your best ideas because you're afraid someone is going to scoop them. Okay? Ideas are a dime a dozen. If you're smart enough to come up with one good idea, you're smart enough to come up with a dozen more good ideas. Okay? So give me your best idea. Okay? And don't worry that someone's going to end up scooping you. If they do, you get to claim some status, right? You can say, "well, I had that idea 10 years ago, back when I took Chem 128 with that crazy guy, and, you know, here's my report — I got an A-minus on it." So, you know, maybe that person goes on to win a Nobel Prize, and you look cool because you thought of the idea first. Okay? But don't hold back. Don't worry about getting scooped or anything like that.
The truth is, any proposal that you do, you're going to have some situation like that, and if you're starting a new company or something like that, sometimes you sign non-compete — you sign confidentiality disclosure agreements, CDAs, in advance with the people you're disclosing ideas to. But honestly, in science, especially in academic science, we're constantly talking about ideas, even with my closest competitors. Okay? My closest competitors — I will tell them exactly what we're working on. Maybe not exactly; I might hold back some key details, but I'll certainly tell them the general area that we're going to be fishing in, right? I'll be like, yeah, we're going to be on this part of the lake. I might not tell them exactly what lure we're using or what kind of line, but I'll tell them what we're going to be doing, all right? So I want you to do the same. Don't hold back. Give me your best stuff. I cannot wait to read these. It's one of the real rewards of the year. I love hearing about your creative ideas. It really is invigorating. It's really deep. So, anyway, I'm looking forward to that next Thursday. Any questions about the assignment? Anything like that? Okay. I will get those back next Thursday, and then it will take me a week or so to process them. I'll be in Brazil from Friday onward of next week, and so you won't hear anything from me for a few days. Don't panic. I'm reading them while I'm on the beach in Brazil. Just kidding. I won't be on the beach. I'll be in meetings. But I will be reading those abstracts on the plane, and by the time I get back, I'll have them all commented on for you. Okay. Yeah. "I just had a quick question. Is there any effective way to make sure that our idea is original?" Yeah. Actually, I'm so glad you asked. Okay. So the easy part of this assignment is thinking of the creative idea. That's the easy part, the eureka moment. The hard work is where you're digging into the literature to see if someone else has already done that idea, and it's essential that you do this. You must do this. So what I do is, when I think of some idea, the first thing I do is I run — well, I no longer have to run to my laptop; I pull up my cell phone — and then type it into Google and do a quick Google search and see what else has been done in that area. And then I'll do PubMed searches, and then I'll change the wording around, and I'll do some more searches. That's the hard stuff. Okay. So thinking of the idea, that's like 5%. The much harder stuff is doing all the background reading to make sure that it is original. It is essential that you propose an original idea. If it is not an original idea, I will give it back to you ungraded. Okay. And I'm going to ask Krithika and Miriam to do Google searches of everyone's idea. Okay. They will do a quick Google search, and they will tell me if it's not an original idea. If it's already been done, it's going to be returned to you ungraded. Okay. And that's not good. That means you have to start from the beginning. Okay. So it's really important that you do that. Thanks for asking. Any other questions? Yeah. "Once we submit these abstracts, are we allowed to revise them at all once you actually read them?" Absolutely. In fact, it's absolutely mandatory that you change your idea based upon my comments, based upon new reading that you do, et cetera.
And what I'm going to do is give it back to you, but I'm going to ask you to hang onto it and then turn it back in with the proposal at the very end, because I want to see the evolution of your ideas in response to my comments. Okay. So I'm going to tell you, yeah, you know, this idea would be a lot better if you went in this other direction — like, there's a new type of aptamer method called mRNA display; you should look into that. And so I'm going to be looking for a proposal, then, that is responsive to that suggestion. Okay. And I'm going to give you points for being responsive to the suggestion, or take off points for being unresponsive. Okay. So yeah, there are going to be considerable changes between now and when the final proposal is submitted. And in fact, some of you are going to end up just totally chucking the first idea and coming up with something new. And that's fine, too. Okay. Any other questions? Okay. Very good. Again, I look forward to reading those. Let's get back to proteins and all things protein-related. Let me just quickly summarize what we saw on Tuesday. As I told you on Tuesday, I introduced you very, very briefly to the 20 naturally occurring amino acids. I'd like you to memorize their structures, their abbreviations, their names. We talked a little bit about how peptides can make effective pharmaceutical lead compounds. Furthermore, when they're cyclized — when their N- and C-termini are joined together to form a ring — the resultant cyclic peptide is amazingly stable even in the stomach, even in this very protease-rich environment of the stomach. It turns out this is actually fairly generic. It seems to work really well. And cyclotides are actually emerging as an important pharmaceutical class of compounds. And we looked at how peptides can also be used as lead compounds to develop small molecule therapeutics. Okay. And for the next topic, we looked at a technique called native chemical ligation for stitching together small peptides into much larger proteins. This actually works pretty well. It's a good technique. The nice thing about it is, because the peptides are chemically synthesized, you can include unnatural amino acids pretty readily. And that allows hypothesis testing, right? If you replace, say, a hydroxyl functionality with a CF3 functionality, maybe you can test whether or not the hydroxyl is donating a hydrogen bond, and you can look at issues like the fluorine-based hydrogen bond. Okay. So you can look at stuff in unique ways using unnatural amino acids that are introduced using chemical synthesis. And along the lines of chemical synthesis, we very briefly reviewed the carbodiimide coupling reactions that you learned about back in sophomore organic chemistry. And I suggested that if those were unfamiliar to you, you might want to go back and review. Okay. And then finally, we ended on talking about how protein splicing can result in the spontaneous removal of an intein using a very similar mechanism. All right. Any questions so far? All right. I want to talk next about conformational analysis, which is trying to understand why it is that proteins adopt specific conformations. Last time, for example, we learned about alpha helices and we learned about beta sheets. And I haven't really told you too much more about how it is that these form.
What are the forces that are driving these structures into these particular conformations? And before we do, just a couple more quick words about beta sheets. Beta sheets come in two flavors. They can be parallel. So one strand is running from N-terminus on the left to C-terminus on the right, and the next strand, again, N-terminus on the left, C-terminus on the right. These are parallel strands. Okay. So notice, N-terminus is on the left, C-terminus is on the right. This strand is going in this direction, the next strand below it going in this direction, going in this direction. On the other hand, more commonly, beta sheets can be found in anti-parallel orientations. And I'm saying more commonly because, if you think about it, in the parallel case there has to be a very long linker between the C-terminus on this side and the N-terminus on this side. All this gray stuff — that's really long. Okay, whereas in an anti-parallel fashion, the beta sheet can very neatly have a C-terminus at one end, a little linker that leads neatly to the N-terminus, leading to the C-terminus, and so on. And recall that N-to-C convention that we use to describe peptides. That also illustrates the directionality of these arrows. The arrows are going from N-terminus to C-terminus, which is how we read protein structure. Okay. Now, something that's interesting about this as well: notice that the hydrogen bonds are at slightly different angles between parallel beta sheets and anti-parallel beta sheets. It turns out that nature abhors flat surfaces. Flat surfaces are very, very rarely found in biology. More commonly, surfaces are curvy. And I'll tell you exactly why beta sheets are curvy in a moment. But before we do, let me just note that beta sheets fold up into structures, surfaces, that aren't perfectly flat. So, very commonly, beta sheets will fold up into this beta barrel. Is this an anti-parallel or parallel beta sheet? Okay. Good. Anti-parallel, right? Because this one, the arrow is going down here, and the next strand is going up in the opposite direction, down in the opposite direction, up in the opposite direction, etc. And that's what we're going to call anti-parallel. Okay. So this is pretty common. Beta sheets can form into these barrel-like structures. These barrels can be fit into plasma membranes, membranes on the surface of the cell. They're also used very commonly as binding proteins and even as active sites for catalyzing reactions. Okay. And stuff can happen either on the inside of the beta barrel, but also on the outside of the beta barrel as well. And, okay. Oh, here's an example of a parallel beta sheet over here on the right. Notice all the arrows are pointing in the same direction. Notice that it, too, is curvy. It's not forming a perfectly flat beta sheet. Instead, these things like to curve, and I'll explain that more in a moment. Even this one that looks relatively flat, of an immunoglobulin domain — curvy. Right? Curvy. It's curving out slightly towards us. I realize it's a little hard to see. Okay. But, you know, these beta sheets — they're called sheets, but they really look a lot more curvy than that. Okay. Questions about beta sheets? Okay. Okay. More pictures of beta sheets. I love looking at pictures of proteins. To me, it's like visiting a zoo or something. They're just so beautiful. Okay. So here's one. This is a nice side view of a beta sandwich. Okay. These are two beta sheets stacked on top of each other, like slices of bread. Notice how this forms this kind of propeller-like twist over here.
This inside in here is not empty. The side chains are sticking in over here and over here, and these sheets are packing together to form a nice core with each other. The side chains are tickling each other from one sheet to another, and those side chains then pack together to form a core that consists largely of hydrophobic residues. We'll take a closer look at that in a moment. Okay. Oh, actually, it's right here. Okay. So here are the side chains of the beta sheet. Notice that the side chains are perpendicular to the sheet. This is due to something called allylic strain, which we'll look at more closely in just a moment. Okay. So bear with me. In about three minutes, I'm going to tell you what allylic strain is. But notice the consequence of allylic strain is that all these side chains are sticking down perpendicular to the beta sheet. They're like blades of grass that are kind of sticking down. And then this beta sheet down here has side chains that are sticking up to grasp onto those side chains up there. Okay. So the beta sheets tend to be curvy, and they have the side chains perpendicular to the beta sheet, ready for interactions. Okay. One final element of secondary structure in proteins that I want to introduce you to: the turns. Turns are found at the ends of each one of these strands. Okay. So here's a beta strand down here. This is a turn. And then it leads to another strand. So each one of these strands is connected together by loops and turns. And let's take a closer look. There are two kinds of turns that are found. A 180-degree turn called a beta turn — okay, so in this case, one strand comes down here, it turns around 180 degrees, it heads back. The other turn is kind of like a right-handed turn. Okay. So it comes in here, and then it turns. And this is called a gamma turn. Okay — yeah, it's a right-hand turn. Notice that both of these turns feature one and only one hydrogen bond. There are other elements that are stabilizing the turn, largely from the strength of these interactions of the strands over here. And there are residues over here that critically staple together the turns. Okay. So you have two side chains that are interacting with each other, such as two phenylalanine side chains. On Tuesday, I showed you how the Phe-Phe side chain interaction was one of the strongest and the most overrepresented in the population of protein structures. So somewhere over here, it's likely that you have something like that between two phenyl groups of phenylalanine side chains. And so this one hydrogen bond is not the world's greatest stabilizer of this turn. You know, it's not so strong that it can force the turn to happen. But there are other residues that play more than a supporting role in assuring that the turn is happening. Okay. Now, here's the thing. Because there's only one hydrogen bond here, what do you think happens to all of those other hydrogen bond donors and acceptors? What do you think they're doing? Yeah. Okay. Carl asked me to repeat the question. So you have one hydrogen bond, so that has acceptor and donor tied up. But then you have all these other donors and acceptors. These guys are available to do things. What do you think they're doing? "Would they be interacting with other proteins?" Yeah. Chelsea suggested that these are interacting with other proteins.
And in fact, that's why these turns are often found at positions that need to accommodate other binding partners. You have all these extra hydrogen bond donors and acceptors that are looking for business. They're hanging out there, they don't have anyone to hydrogen bond to — maybe they can pick up some other binding partners. Furthermore, because the turn itself is set by only this one hydrogen bond, the turn can be relatively flexible compared to the regions of regular secondary structure. These are the most flexible regions of the protein. And so, oftentimes, in areas of protein structure that have to accommodate large numbers of different binding partners, we find these turns — because they can change, they can move around and be flexible enough to accommodate different sizes of binding partners, yet at the same time they have lots of hydrogen bonding functionalities that can then donate to the binding partners. Okay? And let me show you an example of this. This is the interface between two proteins that are interacting with each other, and notice that there are a bunch of these turns or loops that are reaching up to touch each other. Okay? That's fairly typical: two binding partners interacting through these sort of loopy regions, because loopy regions are flexible enough to accommodate diverse binding partners. Another example is found in antibodies, and we'll look at that in a moment. Okay? Make sense? Okay. Now, turns like these obviously require amino acids that don't mind being torqued quite dramatically. Right? If it's going to loop back over here, you need some amino acids that can handle that kind of big turn. And not all amino acids are so accommodating of that kind of thing. When we look at trends in the distribution of amino acids, we find some amino acids are better at making these turns than others. And this also applies to beta sheets: the residues that like to make turns are not so good at making the strands of a beta sheet, where the peptide has to be in an extended line. Right? If the thing wants to turn, it's not going to be so good in the middle of a beta sheet. And so, for this reason, we can classify amino acids as helix formers, helix breakers, residues found in coils, etc. Okay? And let me see — can you guys see this way back here? I wasn't sure when I put these slides together if it would be visible. Oh, actually, my glasses prescription is working really well today; I can see that pretty readily. Can you see this as well? Okay, good. So the larger red numbers indicate a higher distribution of that amino acid in a given structure. For example, alanine is found very commonly in alpha helices; glutamic acid, very commonly in alpha helices. Others, like glycine — not so much. Glycine is found more in coils; for beta sheets, glycine isn't so good because it's too flexible. Glycine is also found very commonly in those turns that I showed on the previous slide. Beta turns commonly have a glycine over here and a proline over here, so GP is a very common motif in turns. Glycine doesn't have a side chain, so it can very readily bend to accommodate this dramatic 180-degree turn. On the other hand, that bendability is not so good for beta sheets, and also not so good for alpha helices, and that's what you see in this example over here. Right? It has a number of 0.47 — a very, very low propensity. Okay, what else can I tell you?
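By the way, you can turn a table like this into a quick sanity check. Here's a minimal Python sketch — the propensity values are approximate, Chou-Fasman-style numbers written from memory, so treat them as illustrative rather than as the exact values in the slide's table:

    # Approximate, Chou-Fasman-style secondary-structure propensities
    # (illustrative values only; >1 means overrepresented, <1 underrepresented).
    P_HELIX = {"A": 1.42, "E": 1.51, "L": 1.21, "G": 0.57, "P": 0.57, "V": 1.06}
    P_SHEET = {"A": 0.83, "E": 0.37, "L": 1.30, "G": 0.75, "P": 0.55, "V": 1.70}

    def classify(residue):
        """Crude helix-former / sheet-former call for one amino acid."""
        h, s = P_HELIX[residue], P_SHEET[residue]
        if h > 1 and h >= s:
            return "helix former"
        if s > 1:
            return "sheet former"
        return "helix/sheet breaker (coil, turn)"

    for aa in "AEGPV":
        print(aa, classify(aa))
    # A and E come out as helix formers, G and P as breakers, V as a sheet former.

The point isn't the exact numbers — it's that a value well above 1 in one column and below 1 in the other is precisely what "helix former" or "sheet former" means.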
Back to the trends in the table. Notice that the amino acids that are branched at the beta carbon are not found so often in alpha helices. Alpha helices are kind of twisted or curled up, and the functionality past the beta carbon tends to run into the coils of the alpha helix. So instead, these large side chains tend to be overrepresented in beta sheets — they're more commonly found in beta sheets. This is things like phenylalanine, tryptophan, tyrosine, these big aromatic residues: more commonly found in beta sheets. Okay, make sense? Now, don't memorize this table, but be prepared to explain some of these trends. A totally legitimate problem would be for me to ask you: here's the structure of phenylalanine — would you expect it to be more common in beta sheets and gamma turns, or in alpha helices, and why? Okay? Make sense? Okay, I want to talk to you in further detail about that kind of conformational analysis, and to do that, I'm going to draw on the board. Okay. Now, we're going to start off very slow — we're going to start off easy, with just ethane. Simple, you know — two carbons. I guess we don't spend enough time thinking about ethane, but let's imagine that you look down this carbon-carbon bond. Okay, this is going to be the symbol for your eyes. So imagine your eyes pointing down this carbon-carbon bond. If you did, and you looked at a projection down that bond, what you would see is three hydrogens — the three hydrogens of the methyl group that's closest to you — and then we would have the three hydrogens of the methyl group that's further away. Okay, make sense? Everyone's still with me, right? Okay, now let's talk a little bit about how much more energy is required to go from this case, which is staggered, to the eclipsed case. So let's rotate by 60 degrees down the C-C bond. If we do this, we will then have the hydrogens eclipsing each other. When you draw these projections, draw them so that you always have the Mercedes symbol in front — I find it easier to do it that way. Okay, so in this case, we've rotated down that carbon-carbon bond, and now, instead of having hydrogens that are staggered away from each other, they're eclipsed on top of each other. This takes energy. Each one of these eclipsing hydrogen pairs costs on the order of a kcal per mole, so the difference in energy here is something like 3 kcal per mole. And these are estimated numbers. Okay. Now, back in Chem 51A, when you first learned about how the staggered conformation is greatly preferred over the eclipsed, I think it was described to you as being due to steric effects, right? This hydrogen runs into this hydrogen over here, and the two of them repel each other. It turns out those hydrogens are really tiny. If we made models of what was happening here, each carbon would be about the size of a balloon, and each hydrogen would be like a little pimple on the side of the balloon. So these little pimples aren't really running into each other. It turns out that's actually not the commonly accepted explanation anymore for why the staggered conformation is greatly preferred over the eclipsed conformation. Instead, what we find is that it's due to a hyperconjugative effect, and I'll attempt to draw that for you here.
Okay, so in the staggered case — whoops, I'll draw this this way. Okay, so this is a different representation, still of the staggered case. Here's that one hydrogen; there's one coming out towards us, going down; there's one going out towards the back, going down; and up here in the back, there's one like that, one like that. Okay. So this is preferred because there's actually a very tiny resonance effect called hyperconjugation. And what happens is the following. That gives you H-plus up here, and H-minus down here. Okay, this is the world's lousiest of resonance structures. It's one that probably hasn't even entered your radar — it's not even on your radar for consideration — but it happens to a very tiny yet appreciable extent. In order for this resonance structure to take place, for it to stabilize things, this hydrogen and this hydrogen have to be anti to each other. And you can only have that anti arrangement in the staggered conformation of ethane; it doesn't happen in the eclipsed conformation. Okay, so for this reason, amino acids in proteins will also adopt anti conformations where possible — they too are following a hyperconjugative effect. Now, I'll tell you, thinking about hyperconjugation will eventually start to numb your mind and kind of hurt the brain; it just requires too much crazy brain power to think about all these resonance structures. So it's simpler to think about it in terms of sterics. The steric arguments will also lead you to the correct answer. But keep in mind that the real thing underlying this steric business is actually a hyperconjugative effect. Okay, now things get much more complicated as we start building up from two-carbon molecules. Let's get started next with butane. So butane can start off in an anti conformation. All right, let's see. Okay, so here's butane. We're going to do our projection down this carbon-carbon bond here. This one is coming out towards us. There's a hydrogen here, methyl group here, and — let's see, sorry — methyl group here, hydrogen, hydrogen. Okay, so that's one possible conformation of butane. Here's another possibility. Again, we're going to rotate around this carbon-carbon bond, analogous to what we were doing up there with the ethane. In this case, we're going to be rotating 120 degrees. And if we do that, we'll have a conformation that looks like this. Okay, so rotating — and always keep one of these constant. And again, the projection now looks like this, where we have our methyl group, hydrogen, hydrogen. Now, which of these two is more stable, the one on the left or the one on the right? Okay, let's take a quick vote. All in favor of the right, raise your right hand. All in favor of the left, raise your left hand. Okay, the rights carry the day. So yes, indeed, this one is more stable; it's called the anti conformation. This one over here is less stable, because these two methyl groups are running into each other; this is called a gauche conformation. And it happens to be less stable on the order of 1 kcal per mole — this one over here requires an additional 1 kcal per mole. Okay, so where possible, we're going to try to find anti conformations of our amino acid side chains. So we've seen ethane, we've seen butane. Before we build up to the next, more complicated molecule — pentane — let me put a quick number on what these energy differences actually buy you.
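Here's a back-of-the-envelope calculation. The relative populations of two conformers follow a Boltzmann relation — I'm assuming room temperature, where RT is roughly 0.59 kcal/mol at 298 K, and neglecting the twofold degeneracy of the gauche conformer to keep the arithmetic simple:

    \frac{N_{\text{gauche}}}{N_{\text{anti}}} = e^{-\Delta E / RT} = e^{-0.9 / 0.59} \approx 0.2

So for butane, per conformer, that's roughly four or five anti for every gauche. For the ~3 kcal per mole staggered-to-eclipsed gap in ethane, the ratio is e^{-3/0.59}, which is about 0.006 — well under one percent eclipsed. That's why a kcal per mole here and there matters so much: populations fall off exponentially with energy.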
Pentane gets much more interesting, because it turns out that in addition to the gauche conformation, you can get one more conformation of pentane that's relevant. So here's pentane — one possibility. You notice that there are five carbons there? This is one possibility. Here's another possibility for pentane. Which of these two is preferred, the one on the left or the one on the right? Right hands, all in favor of the right. Okay, some right hands. All in favor of the left? Left hands. All right, opinions divided. It turns out the difference here is on the order of 3 to 4 kcal per mole. And by the way, where I put the numbers, those numbers belong to the higher-energy conformer. So this one is going to be greater in energy by, we'll just say, 3 kcal per mole — it kind of depends on what we start with. The reason is that these two methyl groups are now banging into each other very strenuously, whereas up here, this methyl group is nicely out of the way. This is called a syn-pentane conformation. Okay, so these are the terms we're going to be using: staggered versus eclipsed, gauche versus anti, and finally syn-pentane versus non-syn-pentane. And this syn-pentane interaction is pretty big, at somewhere between 3 and 5 kcal per mole. It's really big. So proteins are going to do everything possible to avoid running into it, and they can run into it when they end up with beta-branched amino acids. This, at its heart, is what's driving formation of alpha helices and beta sheets — what allows some amino acids to access beta sheets better than other amino acids. So why don't we take a closer look at that? Okay, so these are the terms. Everyone's comfortable now with the definitions of these terms? You're kind of calibrated in terms of the numbers? Okay, let's get started then. Oops. Okay, so we've looked at eclipsed versus staggered ethane; here are some numbers. Oh — we didn't even talk about eclipsed butane. That's huge. To my mind, 5.5 kcal per mole? That's just never going to happen. It's so large that it doesn't even enter the equation. So I think we can all agree: eclipsed butane, horrible. Gauche butane, higher by about 1 kcal per mole — 0.9 kcal per mole — not so great either. But anti: lowest in energy. Okay, so next I want to look very briefly at a few amino acids. It turns out we can very readily predict which amino acids are going to have preferred conformations. And unfortunately, I have to go back up again — sorry, I realize there's a lot of ups and downs. Okay, why don't we take a look at the amino acid valine. Valine has an isopropyl side chain, and it turns out that valine will largely adopt one and only one conformation. Okay — let me just think about this for a second. We're always going to have the NH over here and the carbonyl over here; that will be our backbone. And I'm talking about these things not as valine by itself, but valine in the context of being in a protein. Okay, so valine has an isopropyl side chain, and one possible conformation of valine is this. And yes, there's one other — just to make sure, yeah. Okay, so the angle defined by the C-alpha to C-beta bond is called chi-1, and we're going to be rotating around these chi-1 angles as we look at amino acid side chains. So let's imagine rotating 120 degrees around this chi-1 angle — this angle here in blue.
If we do that, we'll have these two CH3s. Does everyone see that? Rotate 120 degrees around chi-1: one methyl group used to be sticking out over here, one was pointing down; we rotate 120 degrees; now one's sticking off to the right, one's pointing down. Okay, right? You go from here, you rotate 120 degrees, you go to here. Make sense? Okay, what is the difference in energy between these two? Any difference in energy? Okay — always a good guess, but actually both of these are the same. It was a trick question, sorry. Right, because they both have the same number of gauche interactions: this one over here has one gauche interaction, but this one over here also has one gauche interaction. And I'm not even worrying about these two methyl groups down here, right? Okay, let's do one more rotation — same idea, rotating around the chi-1 angle. Again, in blue, this is plus 120 degrees around chi-1. And if we do that — again, keeping the backbone constant — we go from a situation where we had one methyl group sticking up on the right and one going down, to having two methyl groups sticking up and one down here. Okay, so: higher in energy, or lower in energy? Okay, I hear a lot of guessing. So up here, this guy is subject to two gauche interactions — one there and one here, so two gauche interactions. Down here, how many gauche-butane interactions are present in this molecule? Two? One? Any others? Okay, so which one's higher in energy, the top one or the lower one? And by how much? One kcal per mole. Okay, so we'll call this one up here higher, and this one lower in energy — lower by about 1 kcal per mole; 0.9, 1, relatively similar numbers. So when we look at structures of proteins, we find that this valine side chain is going to be predominantly in one conformation. It is going to strongly prefer this conformation over the other conformations. Notice that rotating a little bit further can result in eclipsed interactions; those eclipsed interactions are so high in energy that I'm not even going to consider them. Just thinking about whether you have gauche-butane interactions, and trying to minimize the number of gauche-butanes, means that valine will prefer one and only one conformation. Make sense? Proteins are a massive minimization of interactions like this gauche-butane, and one of the most dominant is something called allylic strain, and that's what I want to talk to you about next. Okay, so let me first show you what allylic strain is. So, let's see. Allylic strain results from rotation around a bond that's allylic to a carbon-carbon double bond. This is the allyl functionality: three carbons, two of them forming a carbon-carbon double bond, with no rotation across that carbon-carbon double bond, and then one more carbon. And there are two kinds of allylic strain. They're numbered: one is called A-1,3, and the other is called A-1,2. In the case of A-1,2 strain, two functionalities run into each other: one attached to the middle carbon of the allyl functionality, and one attached to the carbon that isn't part of the carbon-carbon double bond. Okay, now, there's free rotation around this carbon-carbon single bond, right? The carbon-carbon double bond is fixed — it can't rotate — but the single bond is free to rotate.
So it's going to rotate away from this A-1,2 interaction. Notice it's A-1,2 because this is carbon one and this is carbon two: A-1,2, allylic 1,2-strain. This is huge — this is like a kcal and a half or so. So as it rotates, it can actually rotate into a conformation such as this one up here, where now it has another group — in this case a methyl group — banging into R. These two are going to be approaching each other in space, kind of like the syn-pentane interaction, and again, this is going to be on the order of 3 kcal per mole of deleterious energy. This is bad news. Proteins hate this kind of thing. So instead, what happens is that, preferentially, you get rotation around this green carbon-carbon single bond, and that rotation pushes the hydrogen up into the same plane as the R functionality. Hydrogen, though, is small; it's not subject to this allylic strain. And by doing this, you also avoid having any functionality pointing down here that could be interacting with anything on this carbon-carbon bond right here. So you avoid A-1,2 strain, and you also avoid allylic 1,3-strain — and it's called 1,3 because it's between carbon 1 and carbon 3. Okay: 1, 2, 3. Make sense? Okay, so now you're probably wondering how this could possibly affect proteins. "I just wondered — you said A-1,2 is what it's trying to avoid, right? So why is this one more preferable, in terms of A-1,2, than that one?" Yeah — so, actually, sorry, this figure is incorrect. Okay, very good. Send me an email; you get points for finding a mistake in the book. Thanks, John. Frustrating. Okay, let's look at this a little more closely — thank you for asking. Here are some examples. In this case over here, here's a functionality that has methyl groups, and we have the smallest group next to that methyl group, avoiding allylic strain. Over here, if we rotate around this bond so as to have two methyl groups next to the starting methyl, we're higher in energy by 3.4 kcal per mole. And if the two of them are right up close to each other, that's like the syn-pentane interaction, which we know is highly disfavored. Over here, a similar thing: even when you have a hydrogen attached to the carbon of the carbon-carbon double bond, you can still invoke some strain. (I don't know why this thing is not shutting off — sorry, it's stopped working entirely. Okay.) Now, allylic strain, it turns out, dominates the protein backbone, because there's partial double bond character in each one of those amides that joins together the amino acids of the protein backbone. And that partial double bond character is actually very significant — present something like 40% of the time. (Oh, thank you. Thanks so much.) So here's a regular amide, and you can get rotation around this carbon-carbon bond over here — totally free rotation. But 40% of the time, this amide forms a resonance structure that gives you a nitrogen-carbon double bond. And so now you start getting allylic strain between this oxygen up here and this hydrogen over here. So this conformation is strongly preferred; if you rotate around here, you can't have a functionality up in the same plane as this oxygen, due to that allylic strain.
So the backbone of the protein itself is not all wiggly and flexible; instead, at each one of these amide bonds, it's going to adopt one and only one preferred conformation. This is a really important concept. This is why we keep seeing beta sheets and alpha helices. This is why proteins aren't folding all over the place and giving you all kinds of crazy stuff. There's only one conformation that's going to come out of this, and it's due largely to this allylic strain. Does this make sense as a concept? Okay — thanks, Krithika. Okay, here's what this looks like. This is a histogram of angles for proteins. The angles are defined as follows: the angle about the nitrogen-to-C-alpha bond is the phi angle, and the angle about the C-alpha-to-carbonyl-carbon bond is the psi angle. Graphed here are the phi angles, and here are the psi angles, and the colors are where we actually find proteins. So if the space is white over here, we've never found any naturally occurring protein that occupies that space. By the way, this map was originally made by Ramachandran — it's called a Ramachandran plot — and it's used very commonly to check the correctness of protein structures. Okay, so when we look at this, we find there are two major mountains dominating the histogram. One set of psi and phi angles defines the secondary structure that is the regular right-handed alpha helix I've been showing you. Less commonly, you can also get a left-handed alpha helix, but the right-handed one is the dominant one. We also find beta sheets, again with characteristic psi and phi angles. And because these psi and phi angles are set by this A-1,2 strain and A-1,3 strain, there's very little room for free rotation. This is why we only see two main types of secondary structure. All of this was worked out by the great Linus Pauling just slightly before protein structures were solved — by true brilliance, just thinking about how molecules rotate in space. Make sense? Any questions about this? All right, well, let's look at some more complicated protein structures. The first kind of conformation I want to talk to you about is disulfide bonds in proteins — we're going to get more complicated. Disulfide bonds in proteins form a characteristic dihedral angle. These two sulfurs — the little shiny gold balls — are bonded to each other to form a disulfide, and notice that the dihedral angle here is nearly 90 degrees; disulfides tend to prefer 90-degree dihedrals. There are two conformations, a right hook and a left twist. Okay? So: right hook, left twist. I don't know why I have them reversed here, but that's the idea. Two possible conformations, both with 90-degree dihedral angles, and you can see that very clearly by looking down the sulfur-sulfur bond of the disulfide. So these help stitch together proteins — they provide spot welds to hold proteins together. They're not all that common: a very large protein might have one or two of these, or maybe as many as five, but it's not like every other amino acid has a disulfide. They're relatively rare, but because they're covalent, they're a good way of covalently locking in particular conformations of proteins. Okay — and forming disulfides is a spontaneous process that occurs if you leave thiols just sitting out on your bench.
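A quick aside before we continue with disulfides, going back to that Ramachandran plot for a second. Underneath, the plot is just a pair of dihedral angles computed for every residue, and a dihedral is just four points and some vector algebra. Here's a minimal Python sketch — the coordinates at the bottom are made-up placeholder numbers, not from any real structure; in practice you'd pull the backbone atoms out of a PDB file:

    import numpy as np

    def dihedral(p0, p1, p2, p3):
        """Signed dihedral angle in degrees defined by four points."""
        b0 = p0 - p1
        b1 = p2 - p1
        b2 = p3 - p2
        b1 = b1 / np.linalg.norm(b1)
        # Components of b0 and b2 perpendicular to the central bond b1.
        v = b0 - np.dot(b0, b1) * b1
        w = b2 - np.dot(b2, b1) * b1
        x = np.dot(v, w)
        y = np.dot(np.cross(b1, v), w)
        return np.degrees(np.arctan2(y, x))

    # phi(i) is the dihedral C(i-1)-N(i)-CA(i)-C(i);
    # psi(i) is the dihedral N(i)-CA(i)-C(i)-N(i+1).
    # Made-up coordinates, purely for illustration:
    C_prev = np.array([0.0, 1.4, 0.0])
    N      = np.array([0.0, 0.0, 0.0])
    CA     = np.array([1.4, -0.5, 0.0])
    C      = np.array([2.0, 0.2, 1.1])
    N_next = np.array([3.3, 0.0, 1.3])
    print(dihedral(C_prev, N, CA, C),   # phi
          dihedral(N, CA, C, N_next))   # psi

Compute that (phi, psi) pair for every residue in a structure, scatter-plot the pairs, and you have exactly the Ramachandran plot — the two "mountains" are where almost all the points land.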
Those thiols will go on to form disulfides pretty readily, by an oxidation reaction using air as the oxidant. Okay? In biology, in cells, there is a source of thiol called glutathione, where the otherwise slippery SH functionality is attached to a larger handle that's useful for enzymatic binding. And in practice, disulfide exchange happens very quickly and very readily, and this is an important way of reducing disulfides found in proteins. Okay, so I think I've now introduced you to all the elements of conformational structure in proteins, and we're ready to look at the proteins themselves. Now you understand all of the elements that make up the toolbox for building large protein structures; let's put them together. Okay, so first, in protein structure terms, there are several levels of description of protein structure. The primary sequence is simply the sequence of amino acids, typically in one-letter code. The secondary structure is the listing of these as alpha helices and beta sheets. I'm not sure exactly why that's not being shown here — let me just show you on this one. So: primary structure, the amino acids; secondary structure, the alpha helices and the beta sheets. These oftentimes fold up into discrete domains, and then these domains fold up into larger structures called tertiary structures, which in turn can interact with other structures, non-covalently or covalently, to give quaternary structures and then even larger biological assemblies. Okay. So we now understand how to get the primary structure, by forming amide bonds using either carbodiimides or the ribosome. We now understand why secondary structure forms. Our next step is to understand tertiary and quaternary structure. And I think I'm going to take the full two minutes to go as far as possible, and I'll pick up whatever I don't cover on Tuesday. Okay, so individual domains can fold independently — that will be our working definition of a protein domain: a region of a protein that you can clip out of the larger protein and that will fold up without the larger protein around it. A typical domain is this beta sandwich structure called an immunoglobulin domain: two beta sheets stacked on top of each other. This folding is driven by hydrophobic collapse — the interior of these beta sandwiches is hydrophobic, and the exterior is hydrophilic. That's a concept we've looked at in some detail a couple of times before. If we look at the most common protein domains found in the human proteome, we find that a few dominate over the others, and so I'm going to organize the rest of this lecture according to the most dominant structures. Starting with all-helical proteins: zinc fingers are easily the most common. We've seen these before — we talked about them in the case of transcription factors. In this case, the structure is held in place by the zinc ion, and there's a nice alpha helix on one side. Okay, let's stop here. When we come back on Tuesday, we'll be talking some more about protein structure, and then we're on to protein function.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:28:13 Amino Acids and Proteins 0:30:53 α-Helices Form a Dipole 0:31:09 β-Sheets Come in Two Flavors 0:34:39 Secondary Structure - Backbone, Conformations 0:44:40 Conformational Analysis for the Cognoscente (White Board) 0:58:07 Amino Acid Examples (White Board) 1:05:01 Conformations to Watch For 1:09:38 Allylic Strain Dominates the Protein Backbone 1:13:10 Disulfide Bonds in Proteins 1:15:34 Protein Structure 1:17:50 All α-Helical Proteins
10.5446/18868 (DOI)
Okay, welcome back. We're going to pick up where we left off last time. Last time, we were talking about RNA and all things RNA-related. In particular, today we're going to be talking about translation of messenger RNA to make proteins. We'll be looking at the intricacies of that — how it's regulated, and other aspects — and then we'll look at incorporation of unnatural amino acids into proteins. This is an important frontier in chemical biology, because it allows us to expand the palette of what's available for doing experiments involving proteins. Proteins do a lot, but they only have 20 functionalities available to them, and in recent years, chemical biologists like Peter Schultz have been inventing ways of expanding that palette to go beyond the naturally occurring 20. We'll talk a little more about that in a moment. And then finally, we'll end today by talking about RNA libraries. Okay, next week — week six — we'll be on to chapter 5, protein structure, and it'll be two lectures on protein structure. Then we'll be on to chapter 6, protein function — again, two lectures on that — and we'll just keep rolling along. Okay? Any questions about where we're going, things like that? All right. Okay, some announcements. I think I already went over these, so I don't have to go over them again. I have office hours today — I encourage you to come by. Alternatively, come by the TAs' office hours. My office hour next week will be on Wednesday; I believe it's 2:45 to 3:45. Okay. I already talked about letters of recommendation. Some last-minute announcements on the journal article report: this is going to be due next Thursday, a week from today, at 11 a.m. It is essential that you submit both a hard copy to me and an electronic version through the turnitin.com website. Along those lines, it's not officially turned in until both the hard copy and the electronic version are received. I will not accept any emailed submissions — there are 120 of you, and I don't want to get 120 PDFs to print out. Okay? So, no emailed submissions; it must be received as a hard copy. Okay, so very briefly, let me review the requirements for the article choice. It's a good chance for you to think and make sure you're following directions. Only research articles. You know it's a research article if it has a methods section — if it has some experiments described in it, and an experimental section that discusses how the experiments were done. Now, sometimes those experimental sections are found in the supplementary material that's published online to accompany the paper. Nowadays, when papers are published, typically the paper itself is kind of an abridged version, and there's a supplement published online concurrently that includes a lot of details too voluminous to fit in the paper. Okay? Journals have requirements that you can't exceed a certain number of words or a certain number of figures, but there doesn't seem to be any limit on the supplement, so what people typically do nowadays is have these monster-sized supplements. Last year, for example, I published a paper that was four pages long, and it had like a 25-page supplement — single-spaced, 25 pages, with like an additional 15 figures or something like that. So that's not all that unusual.
And so, if you can find a materials-and-methods or experimental section in that supplement, or in the actual journal article, then you know you're looking at a research article, not a review or a news-and-views piece. Okay, and then again, here are the journals we're going to be using for this. One thing I need to caution you about: this has to be Nature the journal — not "Nature Pharmaceutical Reports" or some such. There are probably 25 journals that have the word Nature in their title. Only two of those are acceptable for this project: one of them is Nature, the other is Nature Chemical Biology. All of the other variants on Nature will not be acceptable. Okay? Macmillan, the publisher of Nature, has a large number of journals, and they might say Nature on them, but unless the article was actually published in Nature or Nature Chemical Biology, it's not acceptable for this project. Okay? And again, if you give me something and you didn't follow directions, I'm just going to hand it back to you ungraded, tell you to redo it, and give you a late grade for that assignment. So it's essential that you get the journal correct. Yeah — question over here? "I tried to enroll in turnitin.com, and it didn't work, and I was wondering if it worked for anyone else." Did anyone have trouble with turnitin.com? You had trouble as well? Oh, so everyone had trouble. Did anyone do it successfully? No. All right, thanks for pointing that out — I will have to take a look at it. Maryam, can you make a note? Okay, thanks. Thanks for letting me know; that's good to know. Last two points: it must have been published in the last year — it needs to have the number 2012 or 2013 on it — and it must clearly focus upon chemical biology. So it has to be a chemical biology article by the definition of chemical biology that we're using for this class, which all of you know. Okay. Any other issues coming up? An issue that came up in my office hours is: how do you find a journal article that's relevant to your interests? I'm hoping you all know about PubMed. There are ways of restricting PubMed searches to specific journals, and I encourage you to use them. Okay. Now, if your interests are exceedingly obscure — like you're only really interested in, I don't know, dermatology — it's possible there were no chemical biology articles that covered, you know, epidermal cells in the last year. So it's possible you won't find any chemical biology going on in that field, in which case you might want to pick another topic and expand your interests. But if your interest is something like HIV, there were probably a dozen chemical-biology-relevant articles published on HIV last year — maybe even more, I don't even know. So it's possible you might have to change your topic around a little to suit what's available. And again, I highly encourage you to choose a topic for this assignment that will then lead into your proposal. Right? That way, you're reading a state-of-the-art paper, and when it comes time for you to propose something, you can basically take what was in the paper, apply it, and go one step beyond. That's a really good way to be creative: read something that's really cool, get inspired by it, bring in some new technique or something like that.
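One practical note on those restricted PubMed searches I mentioned: a query along these lines is the kind of thing I have in mind. The [Journal] and [dp] (date of publication) field tags are standard PubMed syntax; the topic keyword is just a placeholder you'd swap for your own interest:

    "chemical biology" AND ("Nature"[Journal] OR "Nature chemical biology"[Journal]) AND 2012:2013[dp]

Paste something like that into the PubMed search box and you'll only see candidates from the two acceptable journals in the right date window — though you'd still need to screen out reviews and news-and-views by eye.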
And then, before you know it, you're on your own. Okay, make sense? Okay. Any other questions about the assignment, anything like that? Okay. I want to talk to you very briefly about scientific writing. As we've already discussed, this is a major portion of the grade, and it's really essential for your future career — I believe very passionately in the importance of effective writing. So I want to give you a few guidelines. These aren't hard and fast rules; rather, they're guidelines that, if you follow them, I guarantee your writing will be substantially better than everybody else's. Okay. The first of these: strive for simple, direct, clear sentences. Think of your job as being like a journalist, a reporter. You want a Hemingway-esque style — really short, declarative sentences, where each sentence is clear. Your goal is to make your writing as clear as absolutely possible, and the best way to do that is to have short sentences. If your sentence goes past about a line and a half, it's simply too long. There's a good chance that the reader — who's going to be reading these things very quickly, right, that's the way everyone reads nowadays — will not be able to keep track of it. That should clue you in that it's time to break the sentence up into something shorter. Every sentence needs to have a subject and a verb, and if you're choosing a verb, choose one in the active voice. Use the active voice. If you don't know what active voice means, please go see someone on campus who can help you with writing — there's a writing coordinator who can help you with that. If this business about active voice is totally mystifying to you, get it checked out, okay? You need to know what that means. Also along those lines: if the earlier thing I mentioned about PubMed doesn't make sense to you, go see the librarian in the science library. There are people who are expert at doing searches for the kind of thing you're doing. So whatever I'm telling you to do — if it's totally foreign and unfamiliar to you, it's incumbent upon you to seek out resources that will help you with it. I can help you a little bit during office hours, but there are people on campus who are even better than I am at writing, and even better than I am at doing searches, and you should seek them out and use their expertise as well. Double-check your explanations for understandability, for comprehensibility. This is really important. You should be able to take your journal article report after it's written and hand it to the person sitting to the right of you, and that person should be able to understand it. Make sure it's understandable — that's really the true test, and that's one of the things I'm looking for in good writing: I should be able to understand what's written. Okay. And then this is really important as well: avoid pronouns that are unclear. This happens a lot in this assignment. It's very important that you specify precisely the objects and subjects of your sentences. By pronouns, I mean words like "they," "it," "them," "these," "those" — these types of words are inherently unclear. So what happens is you'll have some sentence like, you know, "bile acid drives up production of immune cells" or something like that, and then the next sentence will say, "These have terrible effects."
And what I don't know is whether "these" refers to the immune cells or to the bile acids. It's just not clear to me. And I know what you're thinking — you're thinking, oh, if you spend a little more time on it, it would be clear. But that's not the way you want to communicate. You want to communicate so that the reader has one and only one interpretation of your writing. And again, if you avoid pronouns where it's unclear exactly what's being referred to, you can make your writing much more precise, and that's one of the things you strive for in good science writing. Okay. Questions about science writing, about style? This is the style I want you to follow when you turn this in, and this is how I'll be thinking about it when I assign grades to the written section of your report. Okay, questions about the style? All right. Finally, I want to talk to you about plagiarism. Again, this is one of those things that drives me crazy every year, no matter how many times I talk about it. This will be the last time I discuss it, though. And the reason I'm going to discuss it with you now is that I'm aware not everyone knows what plagiarism is — or certainly, everyone who gets caught doing plagiarism claims they don't know. So we're going to talk about it and define it very precisely. Okay. Plagiarism is borrowing someone else's words. And a relevant question is: how many words do you have to borrow before it counts as plagiarism? In science writing, obviously, you're going to be borrowing some words, because you're going to be discussing the same sorts of things. But what I'm interested in is your own thinking about those words. So, for example, if you're writing about Abl kinase, or this Abl protein, then I'm expecting you to borrow that word "Abl." It's unavoidable — you can't get around it without borrowing that. But what I'm interested in is how you think about Abl, your own thoughts about this protein, and your own spin on it. Okay. So, say a particularly clever sentence from the paper reads something like: "Although compounds that are effective in vitro proved to be cytotoxic in cellular assays, the reported inhibitors provided proof of concept for the efficacy of disrupting Abl." Okay, so that's the example you found in the literature. You agree with it, it makes sense to you, and you want to have a sentence like this in your own report. Let's talk a little bit about what plagiarism would look like if you borrowed it. What students will attempt to do is go through and make a one-to-one mapped version of the same sentence in their own report, and this is what I call plagiarism. For example, they'll replace "compounds" with "small molecules"; "effective" with "acceptable activity"; instead of "in vitro," they'll say "outside cells"; "cytotoxic" becomes "proved toxic"; "cellular assays" becomes "in vivo"; "the reported inhibitors" becomes "the reported molecules"; "provided proof of concept" becomes "demonstrate proof of concept"; "disrupting" becomes "inhibiting." Okay. To me, that's plagiarism. You've basically stolen someone else's thought. Now, admittedly, you have used different words — you've done a one-to-one mapping onto different words — but you haven't told me anything new. And I don't care about someone else's thoughts; I care about your thoughts. The goal of this assignment is for me to learn about your thoughts. Okay.
And the reason I'm telling you this is not that I don't know the whole world is all about ripping stuff off the web and, you know, putting a new name on it and so forth. That doesn't bother me — that's not my concern here. My concern is that I learn how creative you are, and how effective you are at reading something and then interpreting it in a new way, in a way it hasn't been interpreted before. That's the goal of this assignment. The goal is not to simply recapitulate someone else's ideas; the goal is for you to tell me your own ideas, and that's what I want to grade. I want to grade you, not someone else. And that's why I care about plagiarism. Okay. So let me show you how to do this so that you avoid plagiarism. This one down here — this would be okay. What you do is take that sentence and think about it a little bit. And you start to say, well, you know what, first of all the sentence is kludgy. It's a mess. It violates the rule about overly long sentences, right? It's complicated; short declarative sentences are better. So you're going to break it up. You're going to say: "The compounds reported in this paper were too toxic for cell studies." That part is unavoidable. Okay? This is a fact, and there's no way you're going to escape having to state the facts. You can put the facts in your report — in fact, you need to. It's the second part that interests me more, which is the interpretation. And what is said here is: "The report, however, advances cancer therapy by describing a novel mode of small molecule inhibition: disruption of Abl." Okay? So what you've done here is put this report, this scientific discovery, in the context of the larger field, which is cancer research. And that's your spin on it — that's what you've done to show me your creativity. And that's really the value added that I'm looking for in good scientific communication. I know you're going to have to restate the facts; you might even have to restate some of the experimental methods. That doesn't bother me. What I'm really interested in, though, is how you interpret those facts, how you spin them, how you put them in the context of chemical biology and the field and cancer research. That's the part where you get the A grade. That's the part that interests me — the part where I can say, oh wow, this person is thinking in a unique way. That's what I'm really looking for in this assignment. Okay, does that make sense? And I'm not trying to scare you with this plagiarism stuff, but it is scary, because later in your career you can get fired from your job for even small amounts of plagiarism. The great historian Doris Kearns Goodwin was caught out borrowing something like half a sentence. Half of a sentence was enough to tarnish a lifetime of work in which she had achieved so much. Don't let that happen to you. It's not fair to all of your hard efforts, and now would be the time to resolve not to let it happen. Okay. Any other thoughts or questions about plagiarism? Does this make sense? I'm not giving you a definition — I'm giving you an example. Hopefully the example makes total sense; if it doesn't, ask now. Okay. So again, I will have the TAs doing Google searches, searching for this.
It's very easy for us to spot, and if we do spot it, we come down very hard, because this is an academic integrity issue. We will report people to the dean. There will be serious consequences. I don't want that to happen. And so, if we manage to have a whole year — two assignments — with zero plagiarism, then I'm going to bump up the grades that are on the interface between A's and B's, and B's and C's. Okay? That will happen for the whole class. So there's a stick — the stick is the dean's office — and there's a carrot, the carrot of higher grades. Help me get to the carrot side. I will tell you, I've been offering the carrot for many years now, and I've never, ever been able to deliver it. This could be the year. Okay? I know. It depresses me. That's why I keep talking about this stuff — because every time I have someone in my office, they're like, "Oh, I didn't think that was plagiarism." Well, now you do. All right, any questions about this concept? Okay — oh, yeah? "What about the first example — the published sentence — could you use it if you cite it?" Oh, I'm so, so glad you asked that. Okay, so this is brilliant. The question I got was: what if you use this first sentence, and right after it you put a number 2, and that's the reference down to the paper this came from? The answer is no, that would not be acceptable, because you'd still be claiming that these words are your own words. The way it would be acceptable to use this would be to take the first sentence — the published sentence — put it in quotation marks (quotation marks designate that you borrowed it from someone else), and then put the reference down to the citation. Okay? That's really, really important. Almost everyone who plagiarizes includes references — not everyone, but 90% of people put references to the stuff they're plagiarizing from — and it doesn't count. It still counts as plagiarism, even if you reference the source you borrowed from. So — let's see, are you here just visiting, or are you here for the class? All right, welcome. Here, why don't you have a seat, so that way you'll be comfortable — I just don't want you looking so uncomfortable for the whole class. Have a seat here, or here. Okay, you get the hot seat. Okay. Any questions about any announcements? Any other questions? That was a good question. All right, let's move on. Here's what we saw last time: RNA is this malleable polymer that folds upon itself as it forms Watson-Crick and Hoogsteen base pairs. And this malleability is a really fantastic property, because it gives this biopolymer lots of different shapes — it can access lots of different structures, and these structures, as we're going to see today, confer function. Okay, so one of the themes of the class is that the structures of biopolymers lead to their function. Form follows function in biology — not always, but most of the time. We talked a little bit about different base pairs. I also want to emphasize that the molecules we're talking about — the transcription factors, the enzymes, the RNAs — are dynamic molecules. These are molecules that live and breathe, that have motions associated with them, that have kinetic and dynamic parameters associated with them. One of the dangers of teaching a class like this is that I show you a bunch of pictures of beautiful molecules. Okay — it's like going to the zoo or something.
But instead of being at the zoo, you're at a zoo where everything is frozen in place. And you know that's not really the way animals exist — animals like to move around, roaming around the savanna, or their cages, or wherever. Biomolecules similarly move around; they have dynamics. And when I talk about something like a transcription factor and I describe it riding the rails of the phosphodiester backbone, I really mean it. That is exactly what it's doing: it is cruising along that DNA highway as it looks for the correct base pairs to grab onto. This is essential — you must start thinking about these molecules as having a fourth dimension, as having motions associated with them. And this is one of the frontiers in chemical biology. It's an area we need to continue to push and explore and understand better, because in doing so, we're getting a much richer view of how things happen inside cells. Okay, I'll try to keep emphasizing this point. We talked a little bit about how transcription factors scan DNA sequences at very high speeds, and then form distinctly different interactions upon finding the specific sequence they want to bind. In other words, they're zooming along these phosphodiester rails, and when they find that particular correct sequence, they kind of scrunch down, and they form interactions either directly with the DNA bases, or indirectly, through water molecules, with the DNA bases. And that's what allows them to bind to a particular sequence of DNA, recruit the other factors required for transcription, and eventually recruit RNA polymerase and kick off transcription. At the end of Tuesday's lecture, I introduced you to the yeast two-hybrid screen. This is a very powerful tool that allows us to test protein-protein interactions in cells. It's used pretty ubiquitously — I would say its use has fallen off a little in the last few years, but it's still one of the major tools used in biochemistry, molecular biology, and even chemical biology laboratories. I told you about the variant with two binding partners. There are, however, variants with three binding partners, where you can have, say, two proteins that are kind of like the bread in a sandwich, and then a small molecule in the middle that's kind of like the meat in the sandwich, and the three of these things have to come together before transcription takes place. And then it's also possible to look for things that push apart the interaction, if you're turning on, say, a toxic gene — that's called the reverse two-hybrid. So there are half a dozen or so different variants of this yeast two-hybrid idea available, but they're all based on the idea that you can separate the activation domain from the DNA binding domain, and in doing so, you end up with something that can be recapitulated — reformed — upon formation of an interaction. Okay, any questions about what we saw on Tuesday? Questions about anything like that? All right, well, I want to move on then. I want to talk next about translation — actually, I think I have just a little bit more to say about transcription and messenger RNA, and then we're on to translation. Okay, so let me get to where we left off last time. Last time, I ended with the observation that bacteria and eukaryotes have very different levels of complexity in terms of their mRNA processing.
Right — bacteria have DNA that's transcribed, and that mRNA leads directly to translation, whereas eukaryotic cells have DNA that's transcribed, and then the introns, the intervening inserts, are cut out, the exons are rejoined, and the mRNA is modified: at one end a poly-A tail is added, and at the other end there's a cap. All of this must take place before translation can actually happen. So why don't we dive right in and take a closer look at the chemistry of eukaryotic mRNA processing before translation. Okay — let me just get some water; sorry, it's very dry — here's a short summary of what the changes look like. Again, DNA leads, via transcription, to this RNA transcript. The RNA transcript is first capped at the 5-prime end, with this methyl-G cap that's added. And this is kind of a weird-looking thing, right? It has a triphosphate linkage rather than the usual phosphodiester, and some weird connectivity — this is 5-prime to 5-prime — and then you have this cap over here. But this evolved in a way that allows the mRNA to be shuttled very quickly to the ribosome; we'll talk a little more about how that works in a moment. At the other end, the 3-prime end, the messenger RNA is tagged with a long sequence of A's — this is called a poly-A tail. And then finally, the introns are spliced out. They're actually chopped out, either by an active process involving other proteins, or sometimes just spontaneously. And then the leftover stuff — the exons — is actually expressed as protein. Okay, so there's a lot of modification that takes place after the messenger RNA is synthesized, and why don't we take a closer look at it? Let's start with this GTP cap. Here's the triphosphate over here. This is a weird-looking structure — notice the extra methyls, one here and one here; other than that, it looks kind of like a G. This has the function of helping to load the 5-prime end of the messenger RNA onto the ribosome; it gets things going. The way the bond is formed in this methylation event is distinct from, I'd say, 99% of the bond-forming reactions to carbon in biology, but it's also number two in terms of importance, so for that reason we should take a moment to talk about it. First, let me digress for a moment. We'll talk later about how 90-plus percent of carbon-carbon bonds are formed in biology — they're formed using an aldol reaction. This is a rare example of forming a bond to carbon — oh, sorry, this is actually not a carbon-carbon bond, it's a carbon-heteroatom bond — but it's a rare example of forming a bond to carbon without an aldol reaction. This actually uses an SN2 reaction: a straightforward nucleophilic attack by the lone pair on this nitrogen. Notice that this lone pair is not involved in aromaticity, so it is a very good nucleophile, and it attacks the methyl group of this S-adenosylmethionine. S-adenosylmethionine has the role of delivering methyl groups. (Okay — and actually, now that I think about it, that phrase up there is not so helpful to us, so apologies.) All right, now, that's the 5-prime end of the messenger RNA. On the 3-prime end, there's an appendage — a series of A's that are appended — and the exact number varies, but it can be really long.
It runs between 50 and 200 bases of just poly-A, simply stuck on there, and I know what you're thinking. You're thinking this is a total waste of energy for the cell. Why would it bother doing this? What is up with that? Well, it's useful because it binds to a poly-A binding protein — there's a protein, shown here, that evolved to bind to these poly-A's — and that helps direct the mRNAs to the ribosome. Okay, so it turns out that's actually a useful thing. So these two ends of the messenger RNA act as specialized handles, and they give it a directionality — and directionality, as you know, matters a lot in sequences of RNA or DNA, right? There's only one direction that leads to a correct sequence; the other direction leads to gibberish. Now, because all of the messenger RNAs are appended with this poly-A tail, there's a really effective way we can isolate all the messenger RNAs in the cell and throw away everything else. What you can do is set up a solid support that has a bunch of T's bound to it, and then hybridize that against all the stuff found in the cells. The only things that will stick are the messenger RNAs, which have a poly-A tail. In practice, the way this works is that we use the carbodiimide DCC — which we previously saw for forming amide bonds back in Chem 51 — but here we're going to use it to form phosphodiester bonds. What you do is simply add an excess of T — deoxy, or sorry, this is T monophosphate — with this DCC, in the presence of cellulose, and it will react with the primary hydroxyl of the cellulose. Notice that this is cellulose — cellulose, of course, is polymerized beta-D-glucose, and here's the primary hydroxyl of the glucoses — and that will react with one of these T's. Then the T's will polymerize with each other in the presence of this coupling agent, DCC. The mechanism here is exactly like what we saw for formation of amide bonds using DCC back in sophomore organic chemistry, back in Chem 51, and if that mechanism is not apparent to you, please go back to your sophomore organic chemistry textbook and look it up again. Relearn that mechanism — it's a useful one. In any case, what you end up with is basically paper that has a bunch of T's covalently linked to it — poly-T sequences just kind of hanging out there in space. You then solubilize this — you dunk it in water — and you flow the extracts from the cell over it. Almost everything in the cell washes past the paper, except for the messenger RNAs, because the messenger RNAs now form Watson-Crick base pairs, A's to T's: you have poly-A on the messenger RNA, poly-T on the cellulose, and the two hybridize to each other. That allows you to isolate all of the messenger RNAs and wash away everything else found in the cell. Make sense? Okay, so this is very routinely used, but I don't think most people spend much time thinking about how it's synthesized — and it's pretty straightforward. Okay, let's talk about the next step in the processing of messenger RNAs. After they're capped on one end with the GTP cap and on the other end with the poly-A, the introns have to be spliced out. There's a bunch of snRNPs, or "snurps" — small nuclear ribonucleoproteins — that pile onto the introns, bring things together, and set up a transphosphorylation reaction, where you get a transfer of a phosphodiester bond from here to here.
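One quick aside before we finish the splicing story: the logic of that oligo-dT selection a moment ago is easy to capture in a few lines. Here's a toy Python sketch of the selection step — the sequences are made up, and real hybridization is of course more forgiving than an exact string match, so treat this as a cartoon of the idea:

    # Toy model of oligo-(dT) selection: keep only transcripts that end
    # in a long poly-A tail, the way oligo-dT cellulose retains mRNAs.
    def select_mrna(rna_pool, min_tail=20):
        return [rna for rna in rna_pool if rna.endswith("A" * min_tail)]

    cell_extract = [
        "AUGGCACUG" + "A" * 60,   # a messenger RNA with a poly-A tail: retained
        "GCCGAUAGCGU",            # e.g., a tRNA or rRNA fragment: washed away
    ]
    print(select_mrna(cell_extract))   # only the poly-A-tailed message sticks

The design choice mirrors the bench experiment: everything without the tail flows past, and only the A-T base-paired messages are retained.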
Okay, let's talk about the next step in the processing of messenger RNAs. So after they're capped on one end with the GTP cap, and on the other end with the poly A, the introns have to be spliced out. There's a bunch of snRNPs, pronounced "snurps" (small nuclear ribonucleoproteins), that pile onto the introns, bring stuff together, and set up a transesterification reaction. Okay, this is where you get a transfer of a phosphodiester bond from here to here. So it's just a simple exchange, and that has the effect of cutting out the intron in this interesting lariat structure. The details here are not so important for us. Okay? This does, however, bring up the really interesting observation that RNA is capable of catalyzing reactions. And this is kind of our first example of this that we're looking at in some detail. So I want to show you a more canonical example of RNA acting as a catalyst. And that example is the classic hammerhead ribozyme. Okay, so here's the structure of the hammerhead ribozyme in green. This is a naturally occurring RNA sequence. And in red, this is a sequence of RNA that's targeted for cleavage by this hammerhead ribozyme. And what the hammerhead ribozyme does is it orients a base close by the 2 prime hydroxyl to deprotonate that 2 prime hydroxyl. There's also a magnesium bound, and that sets up a nucleophilic attack on the phosphorus of the phosphate. This is starting to look really familiar, right? We've seen ways to cleave RNA before, and you know what? This is identical to it. The only difference here is that the polymer organizing, or really catalyzing, this attack happens to be an RNA. And so whenever we see a catalyst that is an RNA, that's catalyzing some reaction, we're just going to call it a ribozyme. Okay? So it's like an enzyme, except it's made out of RNA. Okay? Recently, my colleague Andrej Luptak discovered that these ribozymes are very widely dispersed across all creatures found on the planet. He's found them in humans. He's found them in starfish and a whole series of other organisms. Again, all of these require magnesium. Magnesium is playing this key role as a Lewis acid. It's stabilizing the negative charge that's surrounding this phosphorus and making it a better electrophile. Okay. So in the cell, the cell has a messenger RNA, and then it has to eventually degrade it. And furthermore, the cell is constantly coming into contact with stuff like, say, viral RNA. So there has to be a mechanism for destroying RNA after it's finished. Okay? So after its time has come, after the translation has taken place, there needs to be a way of degrading the messenger RNA. And for that matter, it's useful to be able to degrade RNA that's coming in from, I don't know, viruses and things like that. Okay, so here's the way this works. One way to target messenger RNAs for destruction is to use an antisense DNA. So the antisense DNA, after it hybridizes to the messenger RNA, will recruit ribonuclease H, and this will then destroy the RNA, okay? So this idea of using antisense DNA as a way of targeting specific messages sent out by the cell would be amazingly powerful, right? We'd have a way, say, of shutting down cancer if we can target specific messenger RNAs that are associated with cancer. This would be very, very powerful, okay? So in recent years, or really for about 20 years now, there have been attempts to develop antisense therapies. These are therapeutics that will do something exactly like this. They'll deliver a sequence that hybridizes to specific messenger RNAs and then recruits ribonuclease H to degrade that message, okay? This is distinct from conventional pharmaceuticals, which often feature a small molecule that inhibits some enzyme, okay?
So the standard way to do this would be to allow the messenger RNA to be translated, resulting in an enzyme, and then disrupt the enzyme by inhibiting it using some small molecule inhibitor, okay? And we saw examples of this, right? We saw, for example, chloramphenicol, targeting chloramphenicol acetyltransferase, right? And so in this case, instead of targeting the enzyme that results from translation, we're going to kill the message itself, prevent translation, and in this way prevent this enzyme from doing its function, okay? So it's a really distinct mode of therapeutics. And I would say that up until two or three years ago, I was deeply skeptical about the whole thing, but there's been recent progress. I believe there are now two drugs that have been approved by the FDA based on this principle, and things are starting to look a lot stronger, okay? Here's what the problem was. Here's why this took so long, okay? So here's one example of an FDA-approved drug that uses this principle. The drug is called fomivirsen, and it targets CMV, cytomegalovirus, RNA. So here's the fomivirsen. It forms perfect Watson-Crick base pairing with the CMV RNA, and that in turn recruits RNase H, which acts as the scissors to chop apart this sequence, okay? And notice that these are called antisense because they have to form the Watson-Crick base pairs, so there have to be C's and G's and A's and T's lining up, although I'm looking now and that doesn't look so neat in this illustration, but you know what I mean, right? So here's T's and A's and G's and C's lining up, and that's why they're called antisense.
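A quick aside on that point: since an antisense sequence is just the Watson-Crick reverse complement of the message, the design step is easy to mimic in code. Here is a minimal sketch in Python; the mRNA fragment is invented, and this is not the actual fomivirsen sequence.

# Compute the antisense DNA strand for a given mRNA fragment.
DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antisense_dna(mrna):
    """Reverse-complement an mRNA into the DNA that would hybridize to it.

    A pairs with T, U pairs with A, G pairs with C; the strand is reversed
    because the two strands of a duplex run antiparallel.
    """
    return "".join(DNA_COMPLEMENT[base] for base in reversed(mrna))

print(antisense_dna("AUGGCGUUC"))  # -> GAACGCCAT

Note the reversal: the complement is taken base by base and then read in the opposite direction, which is exactly the lining-up of T's against A's and C's against G's described above.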
Now, a major challenge is delivering these biopolymers in a way that they can actually get inside the cell and be effective. Challenge number one is that both DNA and RNA are pretty short-lived outside the cell. We've already discussed RNases; there are plenty of RNases circulating. There are also plenty of DNases. Those tend to chop apart wayward strands of DNA or RNA that happen to be floating around, okay? So what people have been doing is modifying the backbone. So instead of a phosphodiester backbone for this antisense therapeutic, one of the oxygens is replaced with a sulfur, and that backbone modification prevents the degradation of the antisense sequence, okay? So that's one thing that's happening. Here are some examples of other backbone modifications. In one, the phosphorus linkage is replaced entirely with amide bonds in a peptide nucleic acid, and perhaps the most effective examples of these are the morpholino oligonucleotides that have this weird morpholine type of backbone. These tend to work really well, okay? These morpholino oligonucleotides are used routinely in chemical biology and biology laboratories as a way of knocking out specific messages. So you can take some mRNA, take that sequence, convert it to an antisense, and then order up a morpholino oligonucleotide, which incidentally is not cheap, but it can be done, and you can then use this directly in your experiments. Notice that the big change here is a change from having lots of negative charge on the backbone to having neutral backbones, okay? That helps quite a bit in terms of delivering the therapeutic inside the cell, right? Negatively charged things have trouble passing through the phospholipid membrane that surrounds cells. We talked about how it has an outside that's polar and an inside that's hydrophobic. Charged things don't like fitting through that hydrophobic region of the phospholipid plasma membrane, and so for this reason, these neutral things are more effective. Yeah? Could you go back a slide? I was just wondering, what's at the bottom there, between those segments? Oh, yeah. Okay. Yeah, this means translation doesn't happen anymore. There really should just be a big X here. Okay. Thanks for asking, Anthony. All right. Let me show you. So I've said before this is useful in the laboratory, and I wanted to show you an example of this. Okay. So what we're doing here is we're interested in targeting a particular messenger RNA that encodes this vimentin gene. Okay. And if the vimentin protein is produced, it will be stained using an antibody, and the antibody happens to be dyed red, so you will see it in the fluorescence micrograph image. And I think I'm going to turn down the lights even more, because this is a little hard to see. So I'm just going to turn these off very briefly. So here are cells. In blue, this is the nucleus being stained with the fluorophore DAPI. It happens to bind well to DNA. I think we might have even seen its structure earlier. And again, in red, this is the vimentin protein. Okay. So this is basically the negative control. This is short interfering RNA that does not target the vimentin gene, and it's really essential that you do these controls. Okay. So you've treated these cells with RNA, but in this case, an RNA that doesn't have the antisense necessary to target the message encoding the vimentin. Okay. Now over here, here are cells that have been targeted using this siRNA, which again is this RNA interference mechanism that we've been talking about. But now the antisense targets the mRNA that encodes the vimentin. And notice that there's very little red. There might be a little bit here, but for the most part, it's totally clear of the red. Yet you can still see the nuclei of the cells, right? You can still see these blue nuclei, which is the DNA of the nuclei being stained. Okay. Can everyone see that? Okay. So this works really well. All right. And again, the way this is going to work in this case is you have a plasmid that encodes this siRNA. Okay. And furthermore, it's even more complicated than that. So you have this plasmid. Recall that plasmids are circular DNA. The plasmid encodes the sequence that's going to be the antisense sequence. And in practice, it actually encodes not something that's simply an antisense sequence; rather, it encodes both the sense and the antisense sequence in a hairpin. Okay. So over here, this upper strand is a sequence that looks kind of like the mRNA that encodes the vimentin gene, but it's just a little fragment of that. And then there's a little hairpin, right? That's a loop that we've seen before. And then down here on the lower strand, this is the antisense sequence. So sense, antisense. Okay. So now what happens is this short hairpin is now a section of double-stranded RNA, and that activates a mechanism in the cell called Dicer. Okay.
And Dicer goes through and systematically looks for any sense strands of messenger RNA that have that sequence and catalytically starts chopping those apart. Okay. And it chops them apart one after another. Okay. And if you want to learn more about Dicer and Argonaute and the other proteins involved, you can read about it in the text. Okay. All right. Let's switch gears. Any questions about mRNA processing? Questions about that topic? Okay. It turns out it's a really active area of research. It's always been active. It's always fascinating. There are new surprises constantly coming along. I want to switch gears though. I want to talk to you a little bit about what happens next. The messenger RNAs are eventually delivered to the ribosome. In prokaryotes, the ribosome binding site, or RBS, is something called a Shine-Dalgarno sequence. It's more of a guideline than a sequence; you can actually get away with some variations on this Shine-Dalgarno sequence. But it turns out that if you don't program it in, nothing happens. Okay. And every so often, you know, someone new joins the laboratory and designs their construct to be expressed, and nothing happens. The cells refuse to express it. And it's because they've forgotten this Shine-Dalgarno sequence. So it is essential. In eukaryotes, there's something called a Kozak sequence, and the idea is the same. There's an area where the messenger RNA is bound by the ribosome, and that kind of gets everything going. Okay.
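A quick aside on that Shine-Dalgarno point, since it trips people up so often: you can sanity-check a construct computationally. Here is a minimal sketch, assuming the textbook AGGAGG consensus sitting roughly 5 to 10 bases upstream of the start codon; the sequences are invented, and a real check would score partial matches and spacing much more carefully.

import re

def has_shine_dalgarno(dna, start_codon="ATG"):
    """Crude check: is there an AGGAGG-like element a few bases upstream of ATG?

    Real Shine-Dalgarno sites tolerate variation ("more of a guideline"),
    so here any 4-base match to the AGGAGG consensus is accepted.
    """
    start = dna.find(start_codon)
    if start < 0:
        return False
    upstream = dna[max(0, start - 16):max(0, start - 4)]
    return re.search(r"AGGA|GGAG|GAGG", upstream) is not None

print(has_shine_dalgarno("TTAGGAGGTTTTTATGGCC"))  # True: SD element, then ATG
print(has_shine_dalgarno("TTTTTTTTTTTTTATGGCC"))  # False: someone forgot it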
Now, the actual ribosome-catalyzed amide bond formation is a pretty straightforward reaction. It simply consists of amines attacking esters. Okay. So recall from back in Chem 51 that if you mix together amino acids and you boil them for a long time, you can form an amide bond. But the efficiency was very low, and you didn't have so much control over which amide bond was going to be made. Okay. So we talked about why it was important to activate the carboxylate of the amino acid to form an amide bond with greater specificity, right? If you were in 51C with me, we had this conversation. And again, if this conversation about activation and DCC is totally foreign to you, totally confusing, go back and take a look at your textbook from sophomore organic chemistry. Okay. So DCC I've alluded to twice in this class, and both times I told you, if you don't know what it is, go back and look. Okay. So in this case, the cell doesn't have access to DCC. Okay. Instead, its activation strategy is forming the amino acid into an activated ester. Okay. So here's an amino acid. It happens to be methionine. And R over here is the transfer RNA. And so what's going to happen is this will form an amide bond when the N-terminus of a nascent peptide attacks this ester. Okay. So this will be the first amide bond. And the second one will form when the next amino acid, delivered on its own transfer RNA, this threonine, attacks, et cetera. Okay. So the ribosome is stringing together these transfer RNAs that have activated amino acids attached to them. So I'm going to be referring to these activated amino acids as aminoacyl tRNAs, where acyl refers to the fact that these are formed into ester functionalities. Okay. Makes sense? Okay. And furthermore, it makes sense that we have this activated ester. Another way of thinking about this is that hydroxide is a terrible leaving group, and so instead of having hydroxide as a leaving group, we have an alkoxide. It happens to be a special alkoxide, and there's a catalyst, et cetera. But that's the idea. Okay. All right. So chemically what's happening here is the incoming amino acid is attacking this ester. Okay. So this is a straightforward attack on a carbonyl. And then you form this tetrahedral intermediate. The tetrahedral intermediate then collapses, and the result is formation of an amide bond. And I have two possible mechanisms here: one that can take place under basic conditions on the top, and one that takes place under more acidic conditions on the bottom. Either one of these is legitimate. Okay. And both of them lead to formation of an amide bond. Okay. A straightforward mechanism. It's one that I'm hoping you're familiar with from back in the day. And if not, go home, try it a couple of times for yourself. It should be pretty straightforward. Okay. So let's take a look at the ribosome itself. The ribosome is really a mega machine. It's a huge machine that has upwards of 20 different constituent parts. These include both proteins, which are shown here in blue, and also RNA sequences that are all put together. Okay. So here's the messenger RNA being read. And then here's the peptide being spit back out of the ribosome. Notice that the site of action, called the active site, is in the very center of the ribosome. And if you look at the center of the ribosome, it's mainly RNA. Okay. So in fact, the ribosome is a ribozyme. It actually relies upon RNA to catalyze this aminolysis mechanism that I showed you earlier. Okay. Let me see if I got everything on here. Okay. So a mixture of RNAs and proteins, et cetera. Okay. It turns out that because it plays such a key role in the cell for protein translation, the ribosome is also a major focal site for antibiotics to target. It's hard for antibiotic resistance to emerge with this one, because the cell can't mess around with the ribosome without losing its catalytic efficiency. It's just too important for the cell to start messing around with. And so many antibiotics target the ribosome. One of these, for example, is the antibiotic tetracycline. Tetracycline binds directly in the active site up here and also has a lower affinity binding site down here. Okay. And it's shown in purple. Okay. So tetracycline, routinely given as an anti-acne medication, is an effective way of killing off bacteria. It happens to have slightly higher affinity for the bacterial ribosome than the human ribosome, but the differences are fairly subtle. Okay. So here's the structure of tetracycline down here. It has four rings, hence the name. And again, this targets the ribosome. There's a whole series of different molecules that target the ribosome. Things like kanamycin, erythromycin. That's another one that should be familiar: if you've had a bacterial infection at some point in your life, you've probably encountered erythromycin. Okay. It's a macrolide antibiotic. We'll talk more about these polyketide antibiotics in a few weeks, probably towards the end of the class. But it also targets the ribosome. Streptomycin also targets the ribosome. Totally different structure; this is an aminoglycoside antibiotic. And then there are antibiotics that target not the ribosome per se, but the machinery that helps to load tRNAs up onto the ribosome. And there are two ways of doing this. One is targeting EF-Tu, shown here, which is what kirromycin does, and another is targeting another protein called EF-G, which is this one over here. And in any case, all five of these molecules operate by a common mechanism.
They all operate by shutting down protein translation for the cell. Okay. So this is one of those areas where it's just really rich with lots and lots of different antibiotics. And we'll see this time and again, right? We talked about molecules that target DNA. We talked about molecules that target the ribosome. These are sort of Achilles' heels for the cell, areas that are real choke points that antibiotics can get in and mess up pretty readily, and do it in a broad spectrum way, where they're killing lots and lots of different species of bacteria in this case. Okay. So let's talk about translation. Translation starts with a start codon, and the start codon encodes the amino acid methionine. In bacteria, the N-terminus of every protein synthesized starts off with a formyl methionine. Notice that there's this formamide that's been appended to it. That's just another way of getting things going. So this is the bacterial case: bacteria start with the formyl methionine. Eukaryotes, no formyl methionine. Let's take a closer look at the tRNA. tRNAs, again, bring amino acids to the ribosome as activated esters, as aminoacyl tRNAs. Okay. And at one end of the tRNA, the 3 prime end, the amino acid is loaded on as an ester. Way down here at the other end, there are three bases called the anticodon, which will try to hybridize to the messenger RNA. And if they hybridize, that tells the ribosome that it's the correct sequence, the correct amino acid that's being loaded in for amide bond formation. This is really essential, this base pairing between the anticodon and the codon. This is what allows the synthesis of the correct sequence, right? Otherwise, you know, you have your DNA up here, your messenger RNA, and your proteins. This is the last step, really, in the central dogma of molecular biology. This is what gives you the correct sequence that was encoded by the DNA in the first place. Okay. Now, here's the way this works. The messenger RNA is read out in three-base sequences called codons. Okay. Each set of three bases leads to a different amino acid, and I'm showing you which of the 20 amino acids they are on this genetic code diagram. Okay. Now, here's the way you read this genetic code diagram. You start in the center. Let's just start with G. Okay. So, if the first residue is G, and the second one is C, and the third one is C, GCC would lead to alanine. Okay. C-A-G leads to glutamine. U-G-A, however, leads to stop. Okay. There are three possible stop codons. Those tell the ribosome: kick it off. You know, kick off the messenger RNA, you're done. Okay. And that stops the sequence. Okay. So, there are 64 possible combinations, and there are only 20 amino acids plus some stops. So, what this means then is that several codons encode for the same amino acid. Okay.
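Before getting to which codon the cell prefers, a quick aside: the decoding step itself is easy to mimic in code. Here is a minimal sketch with a deliberately partial codon table, just the handful of codons used in these examples rather than all 64.

# Partial genetic code: just enough codons for this toy example.
CODON_TABLE = {
    "ATG": "Met", "GTG": "Val", "GCC": "Ala",
    "CAG": "Gln", "ACC": "Thr",
    "TGA": "Stop", "TAA": "Stop", "TAG": "Stop",
}

def translate(dna):
    """Translate a DNA sense strand, three bases at a time, until a stop."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3])   # None if not in this partial table
        if residue is None or residue == "Stop":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("ATGGTGGCCCAGTGA"))  # -> Met-Val-Ala-Gln

Okay, so with the decoding itself clear, back to the question of which codon to use.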
Now, in practice, there's some slight preference for some codons over others, and this preference is dictated by the levels of tRNA. There are some tRNAs that are present in higher concentrations in the cell, and in practice, when you design a protein overexpression, you look for the codons that are more popular. There are some codons that are exceedingly rare inside the cell. And if you have a choice of, say, four different codons, in the case of threonine down here, you'll choose the most popular one. I don't remember which it is, but you would choose, let's say, ACC rather than ACU, because it's represented more often in the genome. Okay. So, here's what it looks like. DNA has a sense strand and an antisense strand. During transcription, a copy matching the sense strand is made, and then this copy is translated. The sequence up here results in the amino acid protein sequence down here. So, for example, ATG we've seen is a start codon. I didn't call it a start codon, but we know it encodes methionine. Okay. ATG, okay, methionine, right? So this encodes methionine. And over here, ATG as a codon at the DNA level results in methionine down here. Okay, similarly, GTG encodes valine, and so over here it results in valine. Okay, and so you can do this pretty readily if you have one of these genetic code wheel diagrams that I'm providing and that is in the book. You can very readily figure out what protein sequence will result. Okay, makes sense? Okay. Now, crucial step. At some point, you have to load the correct amino acid onto the tRNA. If the amino acid is mismatched with the anticodon down here, the cell is in big trouble, right? This is essential to get the correct sequence out. And so, enzymologists debated for a very long time how the molecular recognition of the tRNA would work, with just three bases of the anticodon loop of the tRNA to recognize. And in practice, what we found is that the enzyme responsible for this loading, an enzyme called aminoacyl tRNA synthetase, is a monster. Okay? So, it forms a dimer. It's shown here in green. These are two tRNAs, one on the left side, one on the right side. And notice how this thing is just grabbing onto both of these. So, it's interacting not just with the anticodon down here to read out the sequence, but with lots of other places along the tRNA. And then, furthermore, up here, this is the active site where the amino acid forms an ester bond with the 3 prime hydroxyl. I'll show you in a moment what the mechanism of that reaction is. But again, notice that the aminoacyl tRNA synthetase engulfs the whole tRNA. It's in a bear hug. And so, there are more interactions than just the anticodon loop. And furthermore, earlier, do you remember I told you how tRNAs especially were very heavily modified? Back when we were looking at the cloverleaf structure of tRNA, I said how heavily modified they are. That heavy modification helps direct the correct tRNA over here to the correct amino acid up here, and it's being read out by this enzyme that's checking it over. Okay? Makes sense? All right. So, let's take a closer look at the mechanism. In practice, the mechanism involves activating the carboxylate, because again, carboxylates are very inert. They don't like to form bonds all that readily; hydroxide is a bad leaving group. And so, in practice, this is activated by forming an acyl phosphate intermediate, using ATP as the activating agent. Okay? So, phosphate is kind of like nature's tosylate or mesylate. It's a super leaving group that's ubiquitous, found all over the place in biology. And this is going to work by forming a readily hydrolyzable bond. Okay? So, for example, the glutamyl tRNA synthetase starts with glutamic acid, glutamate, and activates it through an acyl phosphate intermediate. And then the glutamyl tRNA synthetase asks: okay, is the amino acid of this acyl phosphate intermediate actually glutamate?
And if it's not glutamate, then it hydrolyzes this intermediate. And if it is, then it adds the amino acid to the 3 prime hydroxyl of the tRNA. Okay? So, it's a little bit complicated. There are actually a couple of steps where things are checked. The tRNA is bound and gripped in a big bear hug, where the enzyme is actually making sure that it has the correct tRNA, making sure by testing the anticodon, but also looking along the length of the tRNA. And then different acyl phosphate intermediates are brought up to the active site, and the enzyme asks, is this the correct one? Is this glutamate? And if it's glutamate, then it forms a bond. And if not, then it kicks it off. And when it kicks it off, it actually hydrolyzes the phosphate of the acyl phosphate intermediate. Okay. Any questions so far? Yeah, way in the back. It's insanely wasteful, right? You're burning ATP to do this. So, the cell invests an enormous amount in protein synthesis, okay? Which is one of the reasons why cells hate doing overexpression if they can avoid it. Okay? There's a huge selection against it. You know, when we do protein overexpression in the lab and turn cells into factories for producing proteins, they would love to be able to avoid doing that effort if they could. Okay? There's a huge amount of effort involved here. ATP is getting burned. Okay? Great question. Other questions? Okay. So, I told you that in bacteria and other prokaryotes, proteins all end up with an N-formyl group appended to the N-terminus. There's an enzyme called peptide deformylase that hydrolyzes off this N-formyl group. And in eukaryotes, humans, oftentimes the start methionine is hydrolyzed off using methionine aminopeptidase. This is simply a protease that hydrolyzes the amide bond here. Okay? So, it gets in there, hydrolyzes that amide bond, but does it specifically on the N-termini of proteins. It turns out this is also a potential target for therapeutics. So, for example, the natural product fumagillin inhibits angiogenesis, which is the growth of blood vessels in human bodies. And it does this by using a very interesting mechanism. So, the natural product naturally has a three-membered ring, an epoxide, that is precisely positioned next to a nucleophilic imidazole functionality. Recall the imidazole functionality. We talked about it on Tuesday in the context of RNase. Here, we're seeing it again in a different enzyme active site. It's also neutral. It also has, again, the pKa of 7 that we saw. And so, therefore, its lone pair is likely available to act as a nucleophile, and the enzyme gets covalently modified when this imidazole attacks the epoxide. Okay? So, this is an example of a suicide inhibitor. It's suicidal because the inhibitor gets in and, in this case it's sort of reversible, but oftentimes it irreversibly modifies the enzyme active site, and in doing so kills the enzyme. Okay? I actually personally hate that phrase, suicide inhibitor. I prefer Trojan horse inhibitor, which is a better term. It was coined by Konrad Bloch, who's a little bit of a hero of mine. But suicide inhibitor has caught on, so it's hard to change. Okay. Now, why would you want to inhibit angiogenesis? Blood vessel growth is great. If you're at the gym working out, you certainly want to have blood vessel growth to feed those muscles that you're building, right? Okay. Now, the problem is when tumors start to grow, they have a voracious appetite. They are desperate for everything. They need more nutrients. They need more oxygen. They're really hungry. Okay?
And so, they will attract blood vessel growth to feed the resultant tumor. So, an important anti-cancer strategy targets that blood vessel growth and prevents the blood vessels from growing. Those drugs are called anti-angiogenesis drugs; they inhibit angiogenesis. And for some reason, inhibiting methionine aminopeptidase is a strategy for blocking angiogenesis, blocking the feeding of tumors, preventing their growth, and hopefully getting them to shrivel up. And it turns out that's actually an effective strategy when it's combined with other anti-cancer therapeutics. Okay. So, in addition to what I've shown you, there are higher levels of regulation taking place inside the cell that are regulating translation. Here's my favorite; I'm just going to describe one of the many possibilities. My favorite is a messenger RNA that at one end has its ribosome binding site, its RBS, hidden in a hairpin. When the temperature in the cell is increased, the Watson-Crick base pairing of this hairpin breaks apart, exposing the ribosome binding site and then allowing the message to be translated. That's really elegant. Okay. That's the kind of elegant design that I really love. And in theory, everyone in this class could design temperature-sensitive sequences that would get turned on at specific temperatures, knowing, for example, the Wallace rule. Okay.
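A quick aside on that Wallace rule: for short duplexes it estimates the melting temperature as roughly 2 degrees C per A:T pair plus 4 degrees C per G:C pair, so a GC-rich hairpin stem opens up at a higher temperature. Here is a minimal sketch; the stem sequences are invented, and the rule is only a rough guide for short oligos.

def wallace_tm(seq):
    """Wallace rule: Tm (deg C) = 2*(A+T) + 4*(G+C), for short oligos."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# Hypothetical hairpin stems: more G:C pairs means more heat needed to expose the RBS.
print(wallace_tm("AATTAATT"))  # 16: this stem melts easily
print(wallace_tm("GGCCGGCC"))  # 32: this one holds on longer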
Now, we've talked about how this happens in the cell. We chemists are a creative lot, and we're constantly looking for new ways to tinker with stuff and try to get better control over things inside the cell. And one really exciting area that has really taken off in the last few years, but has been applied for roughly 20 years or so, is the idea of incorporating unnatural amino acids into proteins. And so to do this, what chemical biologists have been doing is hijacking the naturally occurring aminoacyl tRNA synthetases that are found in different organisms and then co-opting them into loading specific unnatural amino acids. Oftentimes, these aminoacyl synthetases are modified; they're mutant proteins, modified to accept unnatural amino acids. This is an analog of the amino acid tyrosine that would usually have a hydroxyl over here, but now has an amine. And that's a really cool experiment, because now you can test what happens when I put a better base in place of the hydroxyl, when I put an aniline functionality in place of a phenol functionality. It turns out this is really powerful. It's something my own laboratory applies; we apply it just as a tool. There are other laboratories that are trying to extend it to other areas. It's something I encourage you to use in your proposals, okay? It's basically a bread and butter technique used in chemical biology laboratories that eventually will spread to biochemistry labs as well. The thing is, you can do all kinds of stuff if you can incorporate an unnatural amino acid. For example, you can incorporate metal-chelating amino acids; amino acids that form covalent bonds in the presence of UV light to form crosslinks (this is an example of a photoaffinity tag); amino acids that will react specifically with carbohydrates; and amino acids that will form crosslinks in the presence of other functionalities, such as an azide. So this is enormously powerful. And again, I encourage you to just use it. It's a very routine technique at this point, okay? This is something that actually works well enough that I have an undergraduate in my laboratory, a former Chem 128 student, who's doing it as we speak, okay? And it actually works pretty well. And that really impresses me, okay? You know, he's basically taking a technique that is described in the literature, that our laboratory has never applied before, and getting it to work. Okay, any questions so far? All right, I want to switch gears again. We've talked about translation. We've talked about incorporation of unnatural amino acids. I next want to end with a discussion of aptamers: RNA sequences that bind targets or catalyze reactions. Okay, so it turns out that you can make very, very large libraries of RNA. I mean, I'm talking enormous. You can make on the order of 10 to the 13th to 10 to the 14th different sequences. Okay, so that's a 1 followed by 14 zeros, okay? And you can have all those different sequences in a little tiny Eppendorf tube, a small test tube. And from there, you can do all kinds of experiments on them. Okay, so for example, you can identify RNA sequences that might catalyze this reaction pretty readily, okay? In this case, you're looking for something that will catalyze glycosylation of this amine over here. And the way you will do this is you'll have some sequence appended, and then you look for all of the ones that have a sulfur incorporated, using mercury as a trap. Okay, so that's kind of the overview. Let's look at the details. The key concept here is that sulfur and mercury form a very strong bond, and you can pull out the specific sequences that happen to have sulfur incorporated. Okay, so here's the way this actually works. What you do is you start with some random DNA sequences, where N is any of the four DNA bases. Okay, so A, C, G, or T in this position, A, C, G, or T in this position, and so on. And you're probably wondering, how do you possibly synthesize 10 to the 14th different sequences? Well, it turns out it's very easy. At every step in the process, you inject in all four DNA bases during the synthesis of the DNAs. Okay, so rather than just adding A's, you add a mixture of 25% A's, 25% C's, 25% G's, et cetera. So that gives you random DNA sequences, on the order of 10 to the 14th of them.
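To put that number in perspective, a quick back-of-the-envelope calculation: with four choices at each randomized position, an N-position random region gives 4 to the N distinct sequences, so a random region in the low twenties of bases already lands in that 10 to the 13th to 10 to the 14th range. The little loop below just tabulates a few values; the choice of N values is arbitrary.

# With 4 choices per randomized position, an N-base random region
# gives 4**N distinct sequences.
for n in (10, 20, 23, 24):
    print(n, f"{4 ** n:.2e}")
# 10 -> 1.05e+06
# 20 -> 1.10e+12
# 23 -> 7.04e+13   (inside the 10^13 to 10^14 range quoted above)
# 24 -> 2.81e+14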
You then use an RNA polymerase; we happen to favor one that's used by a virus. Viruses are very good at getting their stuff to the head of the line. They have very aggressive enzymes, which makes sense; they evolved to be really aggressive like that. And so you can use this T7 RNA polymerase, and that will convert the DNA sequences into random RNA sequences. And then you can look for the RNA sequences that incorporate sulfur. Okay, and so here's your compound that you're looking for a reaction with. If an RNA incorporates the sulfur by catalyzing this reaction, then you can isolate it using the bond between mercury and sulfur. Okay, and the mercury is attached up here to some solid support, like the cellulose that we saw earlier when we talked about the poly T column. Exact same idea. Okay, so now the only RNAs that will get isolated are the ones that have sulfur incorporated, the ones that have reacted specifically with this compound. So you go from 10 to the 14th down to, I don't know, 20 or 30 that are doing something. This is really powerful, because if you get a trillion or 100 trillion different sequences together, there's a good chance that you can find one or two that do something special. And you can imagine evolving this. You could take that sequence, mutate it further, make changes down here, redo the selection, and then do it a bunch of times. In practice, we often go for something like 10 rounds with these RNA libraries. And these are often called aptamers: RNA sequences that bind to some target. The inventor of this whole idea is going to be here at UC Irvine next week. Okay, so some of the pioneers in this area are famous here at UC Irvine, and a guy who's the president and CEO of a company that's set up around this concept will be here at UC Irvine giving a seminar next week. I'll send you the details. I encourage you to go to the seminar. It's going to be big. He's kind of a heavyweight in the field. Okay, last thought: there's an antibiotic called puromycin, which manages to sneak into the ribosome and form covalent bonds by mimicking the aminoacyl tRNA. Okay, I don't know why I have this here. It doesn't look so interesting. Let's skip that. Okay, let's just end here on aptamers. So when we come back next time, we'll be talking about my favorite topic, proteins.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:21:06 RNA and Transcription Factors 0:26:04 Comparing Bacterial and Eukaryotic mRNA Processing 0:29:08 GTP Cap Methylation 0:30:30 Using PolyA Tails to Isolate mRNA 0:34:29 Eukaryotic Splicing of mRNAs 0:37:38 RNA Degradation Plays a Major Role 0:40:40 Therapeutic Anti-Sense 0:42:32 Modifying the Oligo Backbone 0:44:20 RNA Interference Used Extensively in the Lab 0:47:57 Where Peptide Synthesis Starts 0:58:26 The Genetic Code: The Language of the Codons 1:00:33 Decoding the DNA to Protein Sequence 1:01:43 How to Load the Amino Acyl tRNA 1:06:38 Post-Translational Modification of the N-Terminus 1:07:25 Inhibiting Methionine Aminopeptidase 1:09:59 Binding to mRNA Provides Further Regulation of Translation 1:10:59 Incorporating Unnatural Amino Acids 1:12:38 Expanding the Protein Palette 1:13:48 mRNA Aptamer Libraries 1:18:18 Puromycin Allows Covalent Linkage to the Growing Peptide During Translation
10.5446/18866 (DOI)
Chapter 3 turns out to be such juicy material that I can't help but extend it by one more lecture. And then next week we'll be back to our usual schedule of each chapter taking one week. Okay, so next week we'll be on chapter 4, the following week chapter 5, etc. What I want to talk to you about today is DNA reactivity with small molecules, DNA biotechnology, and then how this all impacts and impinges upon cancer. And then if we have time we'll start on chapter 4, which is RNA. Okay, so this topic of cancer is something that actually affects, I think, everyone in this room. I think all of us at some point or another know someone who has unfortunately dealt with this terrible disease. It also impacts my students, and it impacts students in this room. Every couple of years I'll have a student who tells me that they fought cancer at some point or another. So it's one of those terrible diseases that strikes all too often. Anyway, our goal is to take a 10,000 foot view, a high level view, of the biology of cancer and then zoom down and start to understand some of the treatments that are used to treat cancer. Okay, some office hour announcements. I have to make my office hour a little earlier because I have to catch a flight to San Francisco tomorrow. And so my office hour is going to be at 1:45 to 2:45. And then my Thursday office hour is going to be cancelled. And I assume no one is interested in talking to me after the midterm. I know well how that works. So those are the office hours. Let's see what else is happening. Chris has an office hour today, and Mary has hers as well; I'm sure they'd be glad to chat with you all. Okay, so that's the plan. I'd also like Chris and Mary to plan for one extra office hour this week before the midterm. Okay, and they've very graciously agreed, so thank you guys. So you will get an email announcing additional office hours as well. But I will have an office hour tomorrow at a little bit of an extraordinary time. And then after that, the following week, we'll be back to our usual time. Okay, announcements. Okay, midterm one. That's coming up in two days. So it's striking really soon. First, there's going to be a review session by the TAs. And the review session, like the midterm, is going to cover through today's lecture. Okay, so everything from the very first lecture through today's lecture is fair game for the midterm. Okay, and if you want to know what's going to be on the midterm, focus on the lectures. Okay, that's what I'll use to drive the midterm as I write it. The seating will be assigned, and it's already posted. Okay, great. So this has already been posted to the website. It's essential that you sit in your correct seats. The seating will be checked, and we'll be checking IDs at the same time. The way this will work is halfway through the midterm, the TAs will make an announcement to pass your IDs to either the right or the left. And then they'll walk around and check the IDs and make sure that everyone's seated in the correct seat. Okay, so you also need to bring a UCI student ID. Now I realize some of you don't have your student ID for some reason. Bring a California driver's license; a photo ID will be accepted instead. Okay. You don't need any notes, calculators, or electronic devices. And it's really important that you don't answer your phone or pick up your notes or anything like that during the midterm, for obvious reasons. Okay. Questions about the midterm, or about anything else that you'd like to ask about?
You guys are so ready for this. Alright. Discussion on Friday. I think we need to have it, right? We already have a new discussion worksheet that's posted, and we'll get too far behind if we don't have discussion on Friday. And then we'll do the same for that discussion; we'll post it for you. And your only discussion before the midterm is tomorrow at 10? Yeah, I hear you. Okay, so we've posted the discussion worksheet, and it covers the material that's covered in today's lecture. So the answer is yes. But the worksheet and the key will be posted. Why don't we go ahead and post that today? Okay. The key as well: usually we post the key after the discussion; this week we'll post it before. Okay. Is that alright? Okay. Other questions? Okay. Again, if you want to know what's on the midterm, take a look at the practice midterm that I've already posted. Take a look at the problems in the book, the ones I've assigned in the book. Take a look at the worksheets, the discussion worksheets. When I go to write the exam, what I do is I sit down with all that stuff. And then I always have some concept problems as well. And I pull those straight out of the lecture. Okay. So the book is chock full of information. It's a textbook, and it's written for a high level audience. But you can focus down your studying just by focusing in on the stuff that's being covered, the stuff I'm emphasizing by assigning it in problems. Okay. So you don't necessarily have to memorize the whole book. Okay. Alright. Well, I wish you the best of luck. It's gonna be fun. You guys are gonna do really well and impress me. And I'm gonna be absolutely thrilled to see how well you do. Okay. Okay, these are the announcements for this week. Start reading chapter four, work the odd and asterisked problems. Again, the midterm is this Thursday. Okay, so no other questions about the announcements, right? Last chance. Okay. So last time, what I was showing you is that DNA and circular plasmids can be folded to an astonishing degree. And I think all of us were blown away when we saw those smiley faces of DNA. That was plasmid DNA to which a large number of other single-stranded DNAs had been annealed to give you regions of double-stranded DNA that forced the plasmid into the happy face structure. Okay. Or the little map of the world structure. And that's pretty tour de force type of stuff. It's happening routinely in laboratories around the country. And it's something that you can sort of take for granted at this point. It's doable. We also talked about how you can use plasmid DNA to program cellular biosynthesis. And I told you about the requirements for plasmid DNA: that there has to be some selection marker, and there has to be an origin of replication. Once you have those two requirements, you can encode all kinds of genes in that plasmid with all kinds of instructions. So the instructions can include things like, start building this factory that's going to produce diesel fuel. Or start doing this particular function that causes you to glow green. Okay. So the ability to program cells using plasmids has really taken off. And again, this is another technique that's sort of in the toolkit. And it's just commonly accepted as doable. And you can all simply start applying it in your proposals. Okay. Now, the other thing that we saw last time is that DNA polymerase is a remarkable machine. Note that it cranks out 1000 covalent bonds per minute, not 1000 per second. Okay. So it's really chirping along.
That's a very, very fast speed at 1000 per minute. And note that it's doing this with nearly perfect fidelity. It's one mistake every 10,000 or so. And that's truly remarkable. And what else do I want to tell you about this? We talked about the structural model, et cetera. Are there any questions about what we saw last time? Yeah, over here, Chelsea. The question is whether that one-mistake-in-10,000 error rate includes the error correction. Yeah. So Chelsea's question is, does the one error every 10,000 or so include error correction? And the answer is yes. So without error correction, its rate of errors would be higher. But it's still really good. I mean, and also I should say that it varies quite a bit depending upon which polymerase you're talking about. Okay. Reverse transcriptase is notable for having a much higher error rate. Okay. Any other questions? That's a good one. All right. Let's move on then. Last time I ended by showing you that you can manipulate the DNA of organisms as a way of studying the phenotypes of organisms. And on the last slide that we discussed, I showed you ways to randomly change DNA. And I'll talk some more about that in a moment. Before I do, though, I want to talk about how to actually program the DNA to have specific changes. Okay. So rather than going out and simply blasting the DNA and making changes here and there, which is kind of an unintellectual way to do it, a much more satisfying way would be to go in and spot weld in specific changes. You know, make a change here, make another change here, and have some hypothesis about those types of changes. It's a fundamentally different way of doing genetics than going for lots and lots of sort of random changes. Okay. So to make the changes specific, what people do is take advantage of the fact that you can cut DNA to have a little bit of an overhang, shown here in red. And then this overhang, if it has complementary Watson-Crick base pairing, that's A's and T's and G's and C's, with that complementarity, these two will come back together, and then you can use another enzyme called DNA ligase to finish up the last covalent bond here where these arrows are. Okay. So simply by bringing together these two big pieces, they will find each other, form the perfect Watson-Crick base pairing, and then re-heal. Incidentally, a technique not unlike this was used for a lot of the DNA biotechnology that I showed with the folding of DNA. Remember where I showed, for example, patterns of DNA, not just the smiley faces, but other ones that involved a lot of these sorts of techniques where you're cutting and pasting DNA. Okay. The results of this can be all kinds of interesting phenotypes: extra little bumps over here, frizzled little wings, little tripled up wings over here. And what's neat about this is that you have some hypothesis going into the experiment. You could say, I think that this particular gene is the gene that causes wings to extend out, and then you could test that hypothesis by disrupting the gene and seeing whether or not you get these short, shriveled little wings. And you can use that sort of knowledge to confirm and potentially learn even more about how development works and other processes. Okay. So let's talk first about how to make DNA that has these overhangs. They're sometimes called sticky ends.
So again, this is the overhang, also known as a sticky end. So to do that, we take advantage of a special pair of scissors, which like normal scissors has a similar sort of symmetry. This is what it looks like structurally. This little hollow region in here grabs onto the DNA and, like scissors, clamps down around it. This class of enzymes is called restriction enzymes. When restriction enzymes cut DNA, they target specific palindromic sequences. Each restriction enzyme typically targets one, and usually only one, sequence of DNA. And note that this is a palindrome. You all know what palindromes are, right? Palindromes are sentences or phrases that read the same in both directions, right? So for example, Napoleon was said to say when he arrived on the island of Elba, "Able was I, ere I saw Elba." Okay, so read that backwards and you still get "Able was I, ere I saw Elba," right, both ways. Okay, so that's kind of a famous palindrome, okay? "Madam, I'm Adam," et cetera. Okay, so notice that these DNA sequences are similarly palindromic, in the double-stranded sense: read the top strand 5 prime to 3 prime and you get G-A-A-T-T-C, and read the bottom strand 5 prime to 3 prime and you get G-A-A-T-T-C again. Okay, so there's the palindrome; it's the same whichever strand you read. So anyway, these restriction enzymes have a symmetry to them just like scissors, and that symmetry in turn dictates that they're going to be chopping apart palindromic sequences. When they make the cuts, these particular cutters are going to cut this bond over here and this other bond over here, indicated by the arrows, and the result will be DNA that has separated and has two sticky ends, by cutting apart these two different phosphodiester bonds, this one and this one. Okay, so here's another picture of this. Okay, so here's the enzyme grabbed on; this is now a side view, the enzyme has grabbed onto the DNA. I've highlighted the DNA in purple and yellow to emphasize the two pieces that are going to come apart, and here it is in its double-stranded configuration, and then here it is after the restriction enzyme chops it apart. And the result now is a sticky end that will look for a complementary Watson-Crick base pairing sticky end, and when it finds it, it will rehybridize, reform the Watson-Crick base pairs, and you can then use this to glue together sequences of DNA. Okay, so typically the way this is done is you glue it back together and insert it into a plasmid that includes an antibiotic resistance marker. That's the selection marker that we talked about last Thursday, right? And I showed you a large number of examples of this. I showed you that you can use a plasmid that includes beta-lactamase, chloramphenicol acetyltransferase, tetracycline resistance, etc. Okay, so these sticky ends turn out to be very useful for pasting in new DNA. You can take advantage of that and do all kinds of things. You can insert in entirely new sequences, like this one here that happens to have the perfect sticky ends, and then they will anneal, and so now you're spreading apart the yellow and purple pieces.
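A quick aside before the cloning workflow: a DNA palindrome means the sequence equals its own reverse complement, which is what matches the two-fold symmetry of the enzyme. Here is a minimal sketch of that check; GAATTC is the real EcoRI recognition site, while the second sequence is just an invented counterexample.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def is_dna_palindrome(site):
    """True if the site reads the same as its own reverse complement."""
    revcomp = "".join(COMPLEMENT[b] for b in reversed(site))
    return site == revcomp

print(is_dna_palindrome("GAATTC"))  # True: EcoRI's recognition site
print(is_dna_palindrome("GATTTC"))  # False: not two-fold symmetric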
Okay, so here's what this looks like in practice. Here's your plasmid. You chop this apart with a restriction enzyme, and that gives you a plasmid that has two sticky ends. And then you chemically synthesize foreign DNA, or maybe you get the foreign DNA by PCR or some other technique, and you set this up so that it has a nice sticky end hanging out over the edge, and then you can re-anneal this blue DNA together with the red plasmid DNA. And the last step here is an enzyme called DNA ligase that makes the last 5 prime phosphodiester bond, and you make covalently closed circular plasmid DNA, and then you send this back into bacteria. And there are ways of punching holes in bacteria to get this plasmid DNA to flow in, and only the bacteria that take up the plasmid will be allowed to live. All the rest are going to die, because you're going to treat the bacteria with some antibiotic, and only the ones that have this plasmid that includes resistance to the antibiotic are allowed to survive those conditions. Make sense? How do you keep the plasmid DNA from re-annealing back onto itself? That's a good question. It's Anthony, right? Anthony's question is a really good one, and it's one of those things that drives people nuts. Okay, the question is what happens if the plasmid DNA re-anneals and closes back up. So this is a little bit technical, but what we do is we chop off the 5 prime phosphate over here on this sticky end. Okay, so there's a 5 prime phosphate hanging off; we chop that off, so now it's just a hydroxyl, no phosphate, and then it can't re-close. The ligase can't operate on it. Okay, so that solves that problem. The problem is we still end up with some empty plasmid that comes through, and so there's always some picking of colonies and sequencing and stuff like that. Okay, so let's talk about modifying proteins. Protein modification is a tool that's used very routinely in chemical biology laboratories. It's, again, something I encourage you to use in your proposals. Very, very straightforward. Okay, so the way we do this is we change the DNA sequence and then use those changes to the coding DNA sequence to result in changes to the protein. And so one good way of doing this is something called QuikChange mutagenesis. That is actually how it's spelled; it's intensely annoying to me. A company came up with this one. But, yeah, anyway, this QuikChange mutagenesis works by having new DNA sequences, shown here in blue, that encode some mutations. Okay, those are the X's. And then you use DNA polymerase to fill in the rest of the plasmid. So that's shown here in blue. Okay, and the tricky part is you then treat this resulting double-stranded DNA with a special restriction enzyme that operates only on methylated DNA. Okay, so DNA that's had methyl groups transferred to it. And do you remember earlier we talked a little bit about DNA methylation as a method for modifying DNA after synthesis? I just mentioned it in passing. But E. coli also do methylation. And so this means that this enzyme will chew apart the green strand of DNA and also the yellow strand of DNA from your original E. coli and leave intact the blue in vitro synthesized DNA. Okay, so the yellow and green are chewed apart by the special restriction enzyme, and the blue remains intact.
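To make the primer design part of this concrete, here is a minimal sketch of building a mutagenic primer: copy the region around the codon you want to change and swap in the new codon at the center. The gene fragment, flank length, and codon choice are all invented for illustration; real QuikChange primer design also worries about primer length, melting temperature, and GC content.

def mutagenic_primer(gene, codon_index, new_codon, flank=12):
    """Copy the region around one codon, swapping in the new codon.

    codon_index counts codons from 0; flank is how many bases of
    matching sequence to keep on each side of the change.
    """
    pos = codon_index * 3
    start = max(0, pos - flank)
    end = min(len(gene), pos + 3 + flank)
    return gene[start:pos] + new_codon + gene[pos + 3:end]

gene = "ATGGCTGAACATACCGGTTTCTGGAAA"   # made-up coding sequence
# Swap codon 3 (CAT, His) for GCT (Ala) to test a histidine's role:
print(mutagenic_primer(gene, 3, "GCT"))  # -> ATGGCTGAAGCTACCGGTTTCTGG

That histidine-to-alanine swap is exactly the kind of substitution you would use to probe a heme-ligating histidine, as in the example coming up.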
So, what the methylation-specific digestion means is that when you transfer the mixture into the cells, a process called transformation, the cells take up this blue DNA preferentially, because the yellow and green stuff has been chewed apart, and hopefully you'll get your mutation preferentially. And this actually works with very high efficiency. You can get, I would say, 50 to 90% efficiency out of this process pretty readily. Okay, any questions about how to make mutations? Okay, so everyone feels comfortable designing a mutagenesis experiment? Okay, well this is really powerful, because now you can go in and test specific hypotheses. For example, in this protein over here, where red is a heme cofactor, you can test the role of residues such as the histidines that are interacting with this heme cofactor. So let's say you wanted to know, what is the imidazole functionality of this side chain contributing to the ability of this protein? You know, let's just say that this is some electron transfer protein, and you have some hypothesis that an aromatic ring in the protein is really key for transferring electrons between point A and point B within the protein. You could then mutate the protein, replace the aromatic ring with just a methyl group. Okay, so you've now taken out that aromatic ring, and you can then test the variant proteins, the protein mutants, which we'll call variants, and test whether electrons can still move between point A and point B over here. That's really powerful. Okay, if it turns out the electrons still move, maybe your hypothesis that the aromatic ring is critical was wrong. Or maybe it turns out that they can't transfer anymore, and the aromatic ring is really crucial. So this allows you to do reverse engineering of proteins. You can take apart specific pieces of the complicated machine called a protein and then examine what each one of those little parts contributes to protein functionality. Okay, any other questions about DNA biotechnology? Otherwise I'm going to move on. Okay, there's a lot to talk about here. This is an absolutely fascinating field. It's one of those areas where there's sort of constant research activity. There are always new techniques coming along, and it's also tremendously fun as well. I would say this is really one of those fun techniques, like organic synthesis. You get to make stuff. It can be frustrating at times, but when it works, it really works well. Okay, I want to switch gears now, and I want to talk to you about DNA reactivity with small molecules. We're going to start with the earliest type of reactivity of DNA, which is simply you walking around in the sun. After this lecture is over, you start walking back to your dorm room. Although it's January, there's still a little bit of sun out there. The chances are you'll raise your eyes to the sky and be grateful you're living here in beautiful California. During that time, your face, your skin, is being hit by UV radiation, and that UV radiation is causing damage to the DNA in your skin cells. Here's what's happening. If you happen to have two pyrimidine residues, two T's, that are stacked on top of each other in a DNA sequence, these can do a 2 plus 2 photocatalyzed cyclization to result in a cyclobutane structure, an adduct on your DNA. Again, this is a straightforward reaction, and I'll just very briefly show you the mechanism here. It's two olefins, and you're adding light, which, as usual, we abbreviate as h-nu.
The result here is a cyclobutane: a straightforward mechanism, not particularly complicated. Here's the problem, though. When this happens, this cyclobutane distorts the DNA. The DNA no longer has the nice duplex structure that we're so used to, and this is a problem. This is something that your cells have evolved to deal with, because organisms on this planet have always faced UV light. Even before we were tearing a hole in the ozone layer, there was still UV light that was sneaking through. A series of enzymes have evolved to fix this problem, and goldfish, rattlesnakes, and other organisms that spend their lifetimes out in the sun actually have an enzyme called photolyase that reverses the 2 plus 2 photocyclization that I'm depicting over here. Let me just make this arrow a little bit better, because it's going to annoy me all lecture otherwise. In this retro 2 plus 2, the enzyme harnesses UV light and then pumps electrons into the cyclobutane adduct and breaks it apart. The irony here is that it takes the same energy, energy from UV light, and this time uses that energy to reverse the reaction and drive it in the opposite direction. Now, to do this, this enzyme has evolved antennas to capture that UV light. The enzyme has two antennas, highlighted here in yellow and red, and I'll show you the chemical structures on the next slide. One of these antennas is tuned to absorb the UV light and pass excited electrons to an FADH2 cofactor, which then pumps the electron into the cyclobutane. Let me show you what it looks like over here. So here's the antenna that absorbs the UV light. It's called MTHF. Its structure is shown here. Okay, so this works great. The E. coli, the goldfish, the rattlesnakes, etc., they're fine. They can spend their days out in the sun. We humans, however, have not evolved this enzyme. We don't have photolyase in our genome. We're not capable of running this reaction. And so what we do instead is we slather on sunscreen. That's what we've done in terms of evolution: we've evolved intelligence, and intelligence has come up with ways of dealing with this. And notice that the structure of sunscreen, this para-aminobenzoic acid (PABA) molecule, looks very similar to a key constituent of the UV-absorbing antenna used by photolyase. Okay, so again, we don't have DNA photolyase, and the sun causes the DNA damage I showed on the previous slide. So this PABA, when it's on the surface of the skin, can absorb the UV light and then radiate it out simply as heat. Okay, so the energy is converted: instead of being electromagnetic energy that's going to excite electrons and cause the 2 plus 2 photocyclization I showed, it's dissipated as heat instead. The problem is, over time, of course, PABA absorbs enough UV energy that it's no longer so effective. It starts to break down. And the other problem is that the stuff is soluble in water and sweat. So over time it loses its effectiveness, and you kind of have to be out there continually slathering yourself in this stuff. And no one is more expert in this than myself, because of my skin color. So suntan lotion and I are very good friends. Okay, now let's talk about other affronts to DNA. Another problem that DNA encounters is that small molecules can also target DNA. And one example of this is a compound called psoralen, which is found in limes.
Okay, so, for example, the skin of limes has this compound psoralen present in it. And I guess this is best dramatized by a group of school kids at a Baltimore day camp who were making pomander balls. Okay, and I'd bet no one in this audience knows what a pomander ball is. Anyone? Okay, well, I'm not too surprised. I didn't know either. Pomander balls are these limes that have cloves stuck in them, and they're often wrapped with ribbon. So this is an example. Okay, so this is a lime, and then some little kid has stuffed a bunch of cloves into the outside, and you're probably wondering, why would anyone want to do this? Okay, so evidently people then throw these in their underwear drawer, and it makes everything smell kind of nice. Okay? That's the idea, at least. So anyway, a bunch of kids in Baltimore were at a day camp, and they were making these pomander balls using limes. And so they're rubbing this lime juice and these lime oils on their skin. And what ended up happening was their skin broke out in all kinds of lesions, just erupted in these terrible red lesions, as the psoralen in the lime reacted with their DNA. I'll show you pictures in a moment. Okay? But let's take a close look at the chemistry that's happening here. So what's happening here is, oh, actually this is predictable. Does anyone want to hazard a guess as to how this compound interacts with DNA? James? Intercalator. Intercalator? Yes. Pi-stacking? Exactly. So this is one of these flat aromatic compounds that's going to slip into the pi stack of the DNA. Right? Okay, so now imagine this. You're rubbing your skin with this stuff, and now it's slipping into your DNA. That's kind of scary, isn't it? Okay, and here's what it looks like structurally. So here's the psoralen. Here it is in the pi stack, and it has olefins perfectly positioned to react with DNA and form, after a photocatalyzed 2 plus 2 cyclization reaction, those cyclobutanes again. And again, the problem here is that the DNA is now cross-linked. It's distorted. It can no longer be used for replication, and it can inappropriately affect transcription, right? Because now you have this hydrophobic mass in the middle. So when the transcription factor comes along and starts scanning the sequence of DNA, it gets all confused, and that's bad news. It could start inappropriately turning on genes that shouldn't be turned on. Okay? And what happens is you get these terrible red rashes. Okay, so I have some graphic pictures in the next few slides, and if you're kind of a softy like me, turn your head away, okay? But if, on the other hand, you're planning to go into, I don't know, obstetrics or something like that, or you really like gory stuff, the next few slides are for you. Okay, I can't resist, right? It's part of the fun of learning about this stuff. Okay, so here's what it looks like. You get these sort of red rashes that look like that, and maybe this isn't so dramatic. But what ends up happening is, on the skin, you basically have lesions wherever it is that that lime stuff comes in contact with your skin. Okay, so this happens a lot with farmers who are out picking limes or celery or other fruits or vegetables that have psoralens in them, right? Because that's the part that's exposed.
It also happens to bartenders, right, who are mixing margaritas in the sun. But it also happens to college students, right? So if you happen to be doing belly shots, and the juice of the lime is kind of running across this person's abdomen, you end up with these bizarre streaks, basically everywhere the psoralen managed to get to, and that can be a lot of places. So, for this reason, college students are at particular risk for this. So that's one of the several PSA announcements that I'm going to make today. In fact, this whole lecture should be subtitled How to Protect Your DNA. Okay, so number one: no belly shots out in the sun. Do it indoors. Let's talk about the processes that are enacted when DNA is damaged. Okay, so in order to divide, cells go through a complicated cycle. And it's a little bit like the military, in that all cells have to either advance up the ranks or they're culled from the population. They must either advance or die. And so cells have a cell cycle. The timing of the cell cycle depends on the cell type. Some cells, as you know, are dividing much more rapidly than other cells. The cell cycle is depicted briefly here. And again, I'm giving you kind of a 10,000-foot view. The book provides more detail, and other classes provide even more detail. So at some point in the cell cycle, in order to be able to divide, the cells go into a synthesis phase, when DNA is being replicated. And during this process there are a number of checkpoints and other sorts of sensors that check whether or not the cell is ready to advance to the next stage, in this case the G2 stage of the cell cycle. These checkpoints are highlighted by these red arrows over here. Okay, so many anti-cancer drugs target this S phase of the cell cycle. For example, you can imagine something like psoralen would massively disrupt this S phase of the cell cycle: the DNA is cross-linked, and the two strands cannot be separated, which is a requirement for DNA replication. So let's talk about how many of these compounds work. I've shown you the 2 plus 2 photocatalyzed cyclization. A more common mode of reaction with DNA involves using DNA simply as a big nucleophile. It turns out that the DNA bases are hugely nucleophilic. The backbone is negatively charged but not so nucleophilic; the bases themselves, though, offer lots and lots of opportunities to act as a nucleophile. The reason for this is that there are lots of lone pairs that are orthogonal to these aromatic rings, sticking out from the aromatic rings, like this one over here on this adenine base. Okay, so where this arrow is pointing, there is a lone pair that is sticking straight out, away from the ring. And if that lone pair is not participating in aromaticity, it is totally ready and available to react with any electrophiles that it happens to encounter. So for this reason, many, many electrophiles are fantastic carcinogens. Okay, if we look at carcinogens as a class, they are largely electrophiles. There are some other classes as well, but in general, you want to avoid electrophiles if you can. Okay, and I'm going to show you some examples. Here's an example that I showed in an earlier slide.
Okay, so in an earlier slide, this was from Thursday's lecture, I showed you that you could treat fruit flies with random mutagens, and that would result in, I forget what it was, I think it was extra eyes that were growing out of the fruit flies. The way that reaction worked is: here is the nucleophile of the DNA attacking the sigma star orbital of the bond between this methyl group and this good leaving group, a sulfonate. Okay, and so what happens now is this DNA has been modified, right? You now have a new methyl group over here, and the problem with that is now, instead of having a lone pair there that a transcription factor can recognize, you have a hydrophobic functionality that might be going out to recruit transcription factors improperly, and so it might start turning on genes that otherwise might not get turned on. Okay? Yeah. Here are some examples. So here again is the structure of DNA. That's the B form of DNA; this is the major groove, this is the minor groove. The arrows are pointing to the most nucleophilic of the lone pairs, and all of the most nucleophilic lone pairs are lone pairs that are on aromatic rings but are not participating in aromaticity. Okay, notice that both of these rings have six pi electrons (two, four, six; two, four, and then six counting this lone pair over here), while this lone pair, the one that has an arrow pointing to it, and this other one with an arrow pointing to it, don't participate in aromaticity. They're kind of spectators in the whole aromaticity business, and because they're not participating, they're really available to act as nucleophiles, super nucleophilic nucleophiles. The problem, of course, is that they also have a role to play in terms of encoding sequences, and that role can be messed up when they are modified. Okay, so it turns out, and I don't want to panic you too much, that even if you encounter a really large amount of electrophiles, you're not automatically going to get cancer. Okay, and there's actually an absolutely fascinating case study on this. In the case that we're going to see, there was an ex-boyfriend of this woman here who wanted to poison her family by giving them all cancer, and he unfortunately had access to chemical mutagens. He worked in a laboratory, I believe in Kansas, and he had access to this compound over here. Or actually, sorry, it's not this compound, it's another simple electrophile. And what he did was add large quantities of an electrophile to milk and to lemonade that he found in the refrigerator. Okay, so he stops by the house, he looks for an opportunity, he goes to the refrigerator, and he dumps in large quantities of these chemical mutagens, shakes the stuff up, and then disappears. Okay? And then later, when other people come to visit, she serves them this cold lemonade that has the mutagen. Okay? Oh, here it is. Here's a picture of him. This is Sherry's former boyfriend. Here's Sherry again. And here's a picture of his disgusting little chamber of horrors. What he did was he actually gave them these N-nitrosamines, which are potent DNA-alkylating agents. And why don't we take a look very briefly at the mechanism of this. Okay?
So this is the nitrosamine, and in the liver it goes through an interesting activation step, where eventually it gives you this diazo compound shown here. Okay? So notice that this has nitrogen as a leaving group. Earlier I showed you a sulfonate as a leaving group; you can't get any better than nitrogen as a leaving group. Right? This is N2. It bubbles off as a gas, a fantastically stable leaving group. It does not get any better than this. Notice too that the electrons get to bounce their way over to a positively charged nitrogen. They love doing that. Okay? That's, you know, electron paradise. And so this makes it a very, very potent DNA-alkylating agent. Okay? This is obviously something that you don't want to ingest. All right, let's get back to our story. What happened to the family over here? It turns out that they died. Actually, Dwayne Johnson and his infant nephew began bleeding internally and basically died in tremendous agony. However, they didn't get cancer. Okay? So they died, and it was horrible, but they did not get cancer. None of the victims got cancer. Not all of the victims died. There's this fascinating book called Toxic Love that can tell you more about this. Here's what we learned from this episode. Okay? So, first, DNA is a terrific nucleophile. Second, massive DNA damage tends to push cells into cell death. And third, it's really hard to cause cancer, even when you have enormous quantities of these mutagenic materials. Okay? There's enough error correction in your DNA repair machinery and your cellular machinery to fix basic changes to the DNA, to fix modifications to the DNA. I'll show you that on the next couple of slides. This is great news. Okay? This means that you can all safely walk back to your dorm rooms, knowing that your DNA is getting cross-linked by the 2 plus 2 photocatalyzed cyclization that I'm showing over here, and also knowing that you're not going to die of that. Okay? Your cells are very good at fixing these types of affronts to the DNA. And in fact, this stuff is totally ubiquitous in our daily lives. How many people had bacon for breakfast? All right. Anyone else? Sausages? Anyone have sausage for breakfast? Okay, well, at least one person had some bacon. I'm a big bacon fan myself. Here's what happens, though. The problem is that sodium nitrite is used as a preservative for bacon, for bologna, for sausages, et cetera. And the problem is that, through heating and also through acid-catalyzed reactions, which can happen in the stomach, this sodium nitrite can get rearranged to eventually form a terrific electrophile. Okay? So from the preservative in packaged meats, you get this fantastic electrophile. The good news is that this is probably not something that we have to spend a lot of time worrying about, because at the same time vitamin C, which is also ubiquitous in our foods, can actually reduce this. It's an antioxidant, meaning it reduces things. It can reduce this electrophile, giving us a much more harmless molecule, nitric oxide. Okay? So here's ascorbic acid, that's vitamin C, and it can react very quickly with this reactive nitrite and prevent the formation of nitrosamines. Okay?
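For the curious, one common way to write the overall stoichiometry of that protective reaction is shown below. Take it as a sketch; the redox chemistry in food and in the stomach is messier than any single balanced equation, but it captures the idea that ascorbate intercepts the nitrosating species:

$$\underbrace{\mathrm{C_6H_8O_6}}_{\text{ascorbic acid}} \;+\; 2\,\mathrm{HNO_2} \;\longrightarrow\; \underbrace{\mathrm{C_6H_6O_6}}_{\text{dehydroascorbic acid}} \;+\; 2\,\mathrm{NO} \;+\; 2\,\mathrm{H_2O}$$

Ascorbic acid gives up two hydrogens and is oxidized to dehydroascorbic acid, the nitrite nitrogen is reduced to nitric oxide, and the electrophilic nitrosating agent never gets the chance to form an N-nitrosamine.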
So here's the nitrosamine over here, and for the reasons that are shown here, this is why you do not want to have nitrosamines around. Again, this is a nitrosamine, and it eventually rearranges to give you a great DNA-alkylating agent. But vitamin C reacts with the nitrite, preventing the formation of those nitrosamines. Okay? So the thing that I'm trying to tell you is that DNA will do a lot of reactions with a lot of molecules, but there's no reason to panic. Okay? Because the chemistry of this stuff is remarkably complex, there are a lot of other molecules around, and furthermore the cells have evolved really effective mechanisms for repairing damage to the DNA. And we'll take a look at that on a future slide. Before we do, let's take a look at some carcinogens that aren't found in our daily lives. Okay? At least I certainly hope not. Many of these are used to treat various kinds of cancer. Probably not so much now, but these were sort of our front-line anti-cancer treatments some time ago. So, these classes of compounds are known as DNA-alkylating agents, for the reason that these are electrophiles that react with DNA and leave the nucleophilic DNA alkylated. So they're going to come along and modify the DNA. Some of these are ones that are kind of familiar to us. Okay, so these nitrosoureas over here have a similar reactivity to the nitrosamines that I showed a couple of slides ago. Others, like these nitrogen mustards, we'll talk some more about. Okay? Or the cyclophosphamide. Okay. So let's start at the top. Mesylates. Mesylates are great leaving groups. Okay? So this compound, busulfan, can alkylate the DNA. Notice that it has not one but two mesylates. That's a problem, because that means it can react with both strands of the DNA. And that's always bad news, right? Because now the DNA is cross-linked: the two strands that otherwise should be held together by hydrogen bonding are now covalently welded together, permanently or semi-permanently. Similarly, this chloromethyl ether is also a potential cross-linking agent, right? Two leaving groups, two chlorides which can act as leaving groups. MOM chloride has one leaving group. Okay, bad news: it alkylates DNA and gets into cells very effectively. These are starting to look like compounds that you encounter in the laboratory, right? We routinely use, for example, chloroform and methylene chloride. We use lots of alkyl halides, for example. Okay? These are things that you encounter in insecticides. Let's see, methyl bromide, for example, is routinely used to treat grapes, and it's used for tenting houses. You know, when you see a tent over a house for termite control, that's actually working using a DNA-alkylating agent that's sprayed into the tent. Okay, these guys over here, the nitrogen mustards, we'll talk more about on a future slide. This one, again, is just like the nitrosamines that I showed on the previous slide, which is to say it eventually rearranges to give you a diazo leaving group. Okay. Now, here's the thing. Many of the compounds I showed on the previous slide are also used as anti-cancer drugs. And the idea there is that you're going to give these to patients and hope that the cancer cells, which are dividing very, very rapidly in the patient, take up the drugs more actively and more rapidly than the other, normal cells in the patient.
Okay? And for the most part, this works. Okay? I mean, it's true that these things are also incredibly toxic, and they'll kill other rapidly dividing cells in the patient, but at the same time they are preferentially loaded into cancer cells. Okay? So the compounds I showed on the previous slide also appear in many anti-cancer compounds. Okay? So this is the dimesylate compound. This is a combination of a nitrogen mustard and also a nitrosourea. This is a nitrogen mustard. And these all have little niche markets. For example, this nitrogen mustard is used to treat a rare form of cancer that's found on the skin. It's a tiny little market, it might be $50 million a year, but it's used as an ointment. And so patients who have this will get this ointment, rub it specifically on the affected area, and use this to kill any rapidly dividing cells that happen to be in that area. Okay? And that's actually kind of nice, because then it's specifically targeted just to that area as an ointment. Paracelsus, who is an absolutely fascinating character in the history of science, had this great quote that goes something like (and I won't read it to you in the German, because my German is not up to it): show me any compound and I'll show you two sides to it. On one side, it's a toxin. On the other side, it can be a treatment or a cure. It all depends on the dose. Okay? So at high doses these can be toxins; at low doses these things can be used as treatments. And that's sort of the paradox of these things. So the goal of chemotherapy is to induce programmed cell suicide, called apoptosis, in rapidly dividing cells faster than you induce new carcinogenicity in normal cells. And that's the big challenge for anti-cancer treatments. At least it was, up until, I'd say, 15 to 20 years ago. In the last 15 or 20 years, chemists have started coming up with compounds that are much more specific at targeting what makes cancer cells cancer cells. Okay? The compounds I'm talking to you about today are ones that simply obliterate any DNA they encounter. The future, really, is targeting the specific attributes that make a cancer cell a cancer cell. Okay. So let's talk a little bit about how your cells repair themselves after an affront. You know, you're walking home and you decide to chat with your friend out in the sun, and so you end up with some cross-linked thymines. Okay? This is the photocatalyzed 2 plus 2 that I'm showing over here. The way this works is that your cells have a constellation of DNA repair proteins that are constantly circulating. Okay? The analogy would be tow trucks that are driving around on the 405 looking for broken-down cars that are disrupting traffic. And then, as soon as they find one, they just pull it off the road rather than letting it disrupt traffic. Okay? So you have these DNA repair enzymes that are constantly scanning your DNA, looking for things like cyclobutane adducts, looking for this sort of affront. When they find this, they don't simply snip out the mistake over here. Instead, what they do is they snip out a big segment of DNA, a big chunk: 10 base pairs on this side, 10 base pairs on that side. This part is excised using a restriction enzyme, and DNA polymerase comes along and fills in the correct sequence of DNA. Here's a little sketch of that logic.
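This is a cartoon of the excision idea in Python. The window size, the sequences, and the "XX" damage marker are all invented, and the sketch ignores strand polarity and the distortion-sensing machinery that real repair enzymes use:

```python
# Cartoon of excision repair: cut out a window around the lesion and
# resynthesize it by copying the intact partner strand. Sequences,
# window size, and the 'XX' damage marker are invented for illustration;
# 5'/3' polarity and damage recognition are ignored for simplicity.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complement(seq):
    """Base-wise complement (polarity ignored in this toy)."""
    return seq.translate(COMPLEMENT)

def excision_repair(damaged, template, lesion_at, window=10):
    """Excise `window` bases on each side of the lesion, then fill the
    gap back in the way DNA polymerase would: by copying the template."""
    left = max(0, lesion_at - window)
    right = min(len(damaged), lesion_at + window)
    patch = complement(template[left:right])  # polymerase copies template
    return damaged[:left] + patch + damaged[right:]

template = "TAGCCGATTACCAATTGCGATCGGA"       # the intact strand
partner = complement(template)               # the strand that gets damaged
broken = partner[:12] + "XX" + partner[14:]  # 'XX' marks a T=T dimer

fixed = excision_repair(broken, template, lesion_at=12)
print(fixed == partner)  # True: the lesion is cut out and re-filled
```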
So it's an excision repair mechanism. Okay? You chop out a big chunk and then DNA polymerase moves in. The reason you can't simply remove just those couple of bases is that DNA polymerase likes to get a running start. Okay? It doesn't work so well on just a couple of bases. It really needs 20 or so before it can start cranking along. It's a professional; it doesn't like to do small jobs. Any questions about what we're seeing? All right. Okay, so monofunctional alkylating agents, things like this MOM chloride compound that I showed on a previous slide, are relatively harmless. Okay? They're not super-lethal, because they can be so readily fixed. This is not to say that you should go out and start inhaling MOM chloride. Okay? You want to stay as far away from that stuff as possible. Here's the mechanism. With this MOM chloride, what's happening is you're actually forming an oxonium ion intermediate, and this oxonium ion intermediate is a fabulous electrophile. Right? Notice that the oxygen bears a positive charge, and we know oxygen, by virtue of its position on the periodic table, does not like having positive charge. It's electronegative. And so this makes it an exceptional DNA-alkylating agent. And here it is being attacked by the nucleophilic DNA. And again, this can be readily fixed using excision repair. The problem is, if you have too many changes, you can actually end up overwhelming the repair system. If there are too many places that are alkylated, then it's just hard for the DNA repair machinery to keep up; it's just too much to handle. The other problem is that some DNA-alkylating agents hit not one but two strands of DNA. And this excision repair mechanism assumes that you have a second strand of DNA that's available to act as the perfect copy and to provide a template to fix the broken strand. What happens if both strands have been damaged, say by X-rays or something like that? You're in trouble. That's a real problem. Okay, so bifunctional cross-linking agents are insanely toxic. Busulfan is a lot more lethal in damaging DNA than two equivalents of methyl mesylate, the monofunctional DNA-alkylating agent that we've been seeing today. Okay, so cross-linking DNA is far more damaging than monofunctional adducts. And again, that's because the cross-linking can hold together the two strands of DNA and prevent you from having the template strand to act as a copy during DNA repair. Okay, so here are a couple of examples of this. In this example, this is a nitrosourea. What's important about this is that it gives us, again, the diazo leaving group (that's a nitrogen leaving group, this diazo compound), analogous to what we saw with the other nitrosoureas. But then this other chloride can hang around and form this adduct with the DNA over here, and the second strand of DNA can attack this. And this gives you two strands of DNA bridged by an ethylene functionality. Okay, so now they're covalently held together, and again, this is bad news. Okay, excision repair works pretty well except when it encounters these sorts of problems, where you get cross-linking or double-stranded breaks. For that matter, the loss of excision repair can also be lethal, if there's a genetic abnormality that leaves a repair protein no longer functional. Okay, and there's actually a fantastic movie about this called The Others, which includes a great performance by Nicole Kidman.
Has anyone seen this? Okay, spoiler alert. The kids have this disease in which they have abnormal DNA repair enzymes. If you haven't seen it, you should still see it; I've only ruined a couple of minutes of the movie for you, not the whole thing. Okay, so missing or deficient DNA repair enzymes cause severe disease. Okay, and again, we have kind of a graphic image on the next slide. So the disease xeroderma pigmentosum, for example, is caused by an incorrectly encoded DNA repair enzyme, and this poor girl has this. And the problem is, again, you're constantly being confronted with damage to your DNA: whenever you're outside in the sun, UV light causes the DNA to be cross-linked by this photocatalyzed 2 plus 2 photocyclization. You absolutely require functional excision repair enzymes. Without those, you cannot live on our planet. There's just too much damage going on. And otherwise, you end up with this terrible disease. Okay, so let's get back to the story with the crazy guy, the psychopath who tried to murder the family by giving them all cancer. How come none of them died of cancer? It turns out that when we look at cancer cells, we find hundreds, if not thousands, of mutations to the DNA. Okay, so when we start sequencing tumors, we find a tremendously heterogeneous mixture of mutations, where each tumor is slightly different. And furthermore, when we take biopsies from different spots of the same tumor, we also find tremendous differences amongst those cells, which otherwise superficially look identical. Okay, so to form a cancer requires hundreds, if not thousands, of mutations to the DNA. It takes an enormous change to the DNA to do that. Furthermore, those mutations have to target three different control aspects of the cell. And I like to think of these as an accelerator, a clutch, and the brakes. Okay, I'm a car guy, so for me this analogy works really well. If you don't drive stick shift, you might not know that back in the day, when all cars were manual, they had a clutch. The clutch was this extra pedal that you would push down to switch gears. Okay? And earlier I showed you that cells have to progress through a cell cycle, and that there are checkpoints in that cell cycle. So I like to think of those checkpoints as shifting gears. Okay, so if the cell cannot shift between different phases of the cell cycle, it cannot proceed. However, if you have mutations to those checkpoints, to the proteins that are controlling the test for whether or not the cell should be allowed to progress, you could end up with mutations that allow the cell to inappropriately progress through the cell cycle, allowing runaway cell division. Okay, so those are mutations to the clutch, which are the checkpoint proteins, often kinases. In addition, all cells have on the cell surface growth factors and growth factor receptors, which are responsive to the external environment. Mutations to these growth factor pathways are absolutely required; that's like a mutation to the accelerator. These growth factors are telling the cell: start dividing, start producing this, get going on this. So that's the mutation to the accelerator. And finally, all cells have tumor suppressor genes. And we'll look in a moment at one called p53, which I like to think of as the brakes. This is the machinery that shuts down the cell when it starts running out of control. And when this happens, these tumor suppressors can trigger apoptosis, cell suicide, and that prevents cancer.
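If you like the car analogy, here's a toy version of it in Python. The gene names and the logic are purely illustrative (real cancer genetics involves hundreds of genes and quantitative effects), but it captures the point that all three control systems have to fail:

```python
# Toy model of the accelerator / clutch / brakes idea: runaway division
# requires hits to all three control systems. Gene names and logic are
# purely illustrative, not a real model of cancer genetics.

REQUIRED_HITS = {
    "accelerator": {"EGFR", "RAS"},   # growth-factor signaling
    "clutch": {"CDK_checkpoint"},     # cell-cycle checkpoints
    "brakes": {"p53"},                # tumor suppressors
}

def cell_fate(mutations):
    """Return the fate of a cell carrying the given set of mutations."""
    hits = {system for system, genes in REQUIRED_HITS.items()
            if genes & mutations}
    if len(hits) == len(REQUIRED_HITS):
        return "runaway division (cancer)"
    if mutations:
        # an intact checkpoint or intact p53 still catches the damage
        return "arrest / apoptosis"
    return "normal division"

print(cell_fate(set()))                             # normal division
print(cell_fate({"RAS"}))                           # arrest / apoptosis
print(cell_fate({"RAS", "CDK_checkpoint", "p53"}))  # runaway division
```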
Okay, so these hundreds to thousands of mutations have to affect all three of these pathways. If just one is affected, then there's a good chance that the cell will default into apoptosis or arrest, and the cancer is prevented. If, on the other hand, you have mutations to all three, then the pathways can start running out of control. Okay, so let's see. If we take a very brief look at p53, which is the tumor suppressor protein: here's the cell, here's the cell's self-destruct button, and there is a whole series of different questions the cell is constantly asking before it goes into cell division. Is it big enough? Is there enough room? Are there two copies of the DNA? Is the DNA properly lined up? If the answer to any of these questions is no, the cell will be immediately sent into apoptosis, where it basically blebs apart. Bleb is a fancy word for explode; it starts blebbing out these blobs off the surface. Bleb and blob are two great words. They sound like each other because that's really what's going on here. So p53 is a hair-trigger sensor that can turn on apoptosis, and mutations to the p53 gene allow DNA damage to accumulate. Mutations of p53 are found in something like 60% of colon cancers, for example. This is a very, very common set of mutations that is behind a large number of different cancer types. And in fact, it's almost mandatory for some cancers to get going, because a functional p53 will shut down most of the runaway cell division that would otherwise cause cancer. Okay, so for this reason, things that you do that affect the p53 gene are really, really serious. And, for example, scientists have been able to link mutations of p53 directly to smoking. Okay, so if you're out smoking, there's a good chance that you're starting to cause mutations to p53. This is literally the smoking gun. This is what establishes the link between smoking and cancer. Until this work was done, in the mid-90s or so, cigarette companies were able to promote their cigarettes by saying there was no definitive evidence that smoking causes cancer. Here is the definitive evidence. What happens is, you end up with mutations to the residues that are highlighted in yellow. These are residues that have a positive charge and interact with double-stranded DNA. Notice that p53 does a lot of things in the cell. One thing it does, quite importantly, is act as a transcription factor to trigger apoptosis. And if these positively charged residues are mutated, the p53 can no longer bind to the DNA, and it can no longer trigger apoptosis. So smoking affects p53 and introduces mutations, at the molecular level, specifically to these residues here, and that in turn results in cancer. If you're smoking now, you should stop. If you know people who are smoking, you should persuade them to stop. I cannot think of anything worse. My father died of lung cancer. I am passionate about this. Stop smoking now. You might as well just put your mouth up to the exhaust pipe on a bus or something. It's as crazy as that. All right. Now, what is it about smoking that causes cancer? We actually know quite a bit about why it is that unburned things cause cancer, and it goes back a long way in the history of biology. And I guess we have to go back to the father of epidemiology, the great Sir Percival Pott. Percival Pott, shown here, was a physician in London, and he noticed that his chimney sweeps had a ridiculously high level of testicular cancer.
So chimney sweeps are these guys who, in 18th-century London, would actually climb into chimneys. This isn't the Santa Claus story; these are actual humans climbing into the chimneys as a way of cleaning them out. They carry brushes that look like this, and they would just be covered in soot, okay, because they're in the chimney where there's all this unburned, sooty stuff. In that unburned, sooty stuff, what we find are chemicals that cause testicular cancer and other types of cancer. And I have a picture of testicular cancer on the next slide. Sorry, I couldn't resist. If you have an aversion to this sort of thing, avert your eyes. I'm convinced, though, that by showing you these images, at least one of you, one of my students, is going to benefit from this someday. I'm really hoping that I'll prevent at least one person from dying, or maybe I'll help one person detect cancer very early. Okay, so here's a picture of a cross-section through a testicle. Check out the scale over here; this is in centimeters. So this whole region in here is one giant cancer that has grown. You end up with these tremendous growths of cancer in testicles. Now, here's the good news. The good news is, if it's caught early, this is totally treatable. This is the sort of thing that, if you catch it early, can be stopped by both chemotherapy and surgery. And so, men in the audience, you should all be thinking about monthly self-exams, and you can get more information here. Again, this is another one of my PSA announcements. I really hope someday I'll have prevented someone from dying. That would be the coolest thing. Okay, so testicular cancer. Oh, okay, another picture. Okay, this one's a little bit gorier. These are actually the cancers, highlighted over here. It's a little hard to see in there. But again, this is totally treatable. Okay, so first of all, Pott is noticing that in London a very high percentage of his chimney sweeps are coming down with testicular cancer. And he's wondering: what is it about the soot? He hypothesizes that the soot is what's causing this testicular cancer in chimney sweeps. We now know that in unburned carbon, in unburned stuff, there are a lot of carcinogens, and I want to show you that on the next slide. Okay, so this benzopyrene is found in high concentrations in unburned carbon. Okay, so, question over here? Oh, let's see, that was a picture of an operation to remove the cancers. Best not to look at it too closely. We'll talk more about it later. All right, so these compounds over here definitively cause cancer. Okay, so, for example, if you take benzopyrene and you rub it on the backsides of these rats, they come down with these horrible cancerous lesions, those bumps over here. And these work by mechanisms that you can predict. Right? These are flat aromatic compounds. How do they work? Flat, aromatic? Yes, on three, let's do it together. One, two, three. Intercalators. Intercalators, yes. So these flat aromatic compounds fit into the DNA; they slide straight into the pi stack. But in addition, they can also alkylate the DNA. And this is less obvious, and I want to show you this on the next slide. Okay, so when you smoke cigarettes, or for that matter when you smoke alternative things, there are unburned bits in there, and those unburned bits include benzopyrene. Okay, and the benzopyrene that gets into your liver is epoxidized. Okay, so your liver will try to process this stuff.
Okay, so you can imagine this stuff is not very soluble in water, and it's really bad news to have insoluble fragments of stuff floating around your bloodstream. So the liver does its very best to deal with this, and the liver's strategy for insoluble matter is to oxidize it and introduce hydrophilic functionality that will make it soluble in water. Okay, and so here is an enzyme in your liver oxidizing the benzopyrene. Here's a successful oxidation to make a diol. Here's a second oxidation that creates, instead of a diol, an epoxide. The epoxide has a strained three-membered ring, and this strained three-membered ring is a fantastic electrophile. Okay, so now this is really bad news. You now have this intercalator that's sliding into your pi stack and that has the perfect alkylating agent, an electrophile, delivered right up against your nucleophilic DNA, and that's really the problem. Okay, this is an enormous problem. What ends up happening is the DNA gets modified covalently by this benzopyrene. Okay, so for this reason, countries that eat a lot of barbecued food, that have a lot of sort of burnt stuff in their diets, tend to come down with high levels of stomach cancers, likely because they're consuming benzopyrenes in their diet. Your goal should be to try to eliminate benzopyrenes as much as you possibly can. There's lots of stuff out there that people are paranoid about, that they think is bad for them. Here's one that we genuinely know is bad for you, and here's something you can do to help yourself live a lot longer, simply by avoiding it. All right, here's another one. This is aflatoxin. Aflatoxin is produced by molds that grow on crops like peanuts. Okay, and in the United States, all of the peanut butter is tested for the presence of this mold, this Aspergillus flavus mold. But in other countries, where the public health systems and the food safety mechanisms aren't as vigorous, this is more of a problem. The mold looks like this, and it gives off this aflatoxin, which again is modified by cytochrome P450 (I should have named it earlier: this is cytochrome P450 in the liver), and that introduces an epoxide. This epoxide is a fantastic electrophile for nucleophilic attack by the DNA, and once the DNA is modified, this eventually leads to DNA strand cleavage. So modified DNA eventually leads to DNA that's been chopped apart. DNA that's been chopped apart is no longer usable, and if you get hundreds of these mutations, eventually you get cancer. Okay, bad news. All right, let's talk about the nitrogen mustards. So earlier I showed you compounds that had a nitrogen, an ethyl group, and a chlorine. Here's a variant where it has a sulfur instead of a nitrogen. Both of these compounds go through a common mechanism, in which the heteroatom in the center of the compound acts as a nucleophile to form a terrific electrophile. And in this case, the nucleophile attacks and is alkylated. If this nucleophile is DNA, then the DNA can be cross-linked, giving you two strands that are covalently welded together. Okay, so earlier I showed you, for example (I believe it was the nitrogen mustard), the compound I told you about that you rub on your skin for this very rare type of cancer; it's used as a chemotherapeutic. These were also used as war gases in World War I, which is completely insane, that you would actually do this to anybody on the planet.
But in any case, these compounds cause cancer by forming fantastic electrophiles, which then react with the nucleophiles found in the DNA. Okay, so again, here is the nitrogen mustard equivalent. And these are compounds that we saw earlier in today's lecture. All of them go through a common intermediate, this aziridinium ion over here. And notice what a great electrophile this is. The nitrogen has a positive charge on it, and nitrogen hates having positive charge. So when the nucleophilic DNA attacks here, the electrons get to bounce their way over to the positively charged nitrogen, setting up sort of the perfect electrophile for modifying DNA. Okay, any questions about the nitrogen mustard and sulfur mustard DNA-alkylating agents? Okay. I want to switch gears now, and I'm going to take the last five minutes to start talking a little bit about RNA, just to kind of whet your appetite before we talk about RNA in earnest. Okay, there are no other questions about DNA, right? Questions about cancer, anything like that? So, RNA. Chemical biologists have come around to recognize RNA's tremendous importance in cell biology. As an example of this, this is actually a structure of DNA that was posited by Phoebus Levene, whom I'm going to show on the next slide. And, you know, this is around 1920 or so, when the structures of DNA and RNA were not very well understood. So he posited that this was the structure of DNA. Of course, this is totally wrong, right? We know what the correct structure of DNA is. But what Levene got right is that he also assigned the pentose ribose sugars in nucleic acids. He specifically showed there was deoxyribose in DNA, and he coined the term nucleoside to denote the base attached through the glycosidic bond. Okay, so it's possible for science to be totally wrong but still get some details correct. Okay, so this structure to us looks totally nuts, right? It's only, you know, four bases of DNA, arranged in a circle. But on the other hand, when you look more closely at the details here, you start to find all kinds of interesting features: like, for example, that the bases are held on by glycosidic bonds, that this has a phosphodiester backbone, that the connectivity here is 5-prime to 3-prime, etc. Okay, so in a similar way, we chemists have had sort of a re-evaluation of the importance of RNA. When it was originally discovered, it was thought to be merely a go-between for DNA and proteins. And in the last 20 years, our appreciation of RNA's key role in the cell has expanded enormously. Okay, so we now know that RNA can act as a soldier, sailor, tinker, and spy. First, as a soldier, RNA is actually a very effective catalyst for cleaving its own sequences; it can actually go out and cleave sequences of other RNA strands. As a sailor, transfer RNA delivers amino acids to the ribosome. Then, as a tinker, the ribosomal RNA can act as a catalytic machine to synthesize proteins. And finally, as a spy, messenger RNA encodes proteins. So RNA is capable of all kinds of things, and this means that when we come back on Tuesday, we're going to have a lot to talk about. Thanks.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:06:40 DNA Chemistry 0:09:48 Cutting and Pasting DNA 0:18:42 Protein Modification by PCR 0:23:15 UV from Sunlight Cross-Links Thymines 0:25:09 E. Coli Photolyase 0:26:36 Protecting Your Cells By Sun Screen 0:33:24 Cells Must Advance or Die 0:34:57 DNA as a Big Nucleophile 0:39:51 N-Nitrosamines = Potent DNA Alkylating Agents 0:45:00 Known Carcinogens 0:47:44 Paradox: Cause and Cure? 0:50:42 Excision Repair in Humans 0:57:25 Hundreds of Mutations Required to Cause Cancer 1:03:57 Sir Percival Pott 1:05:19 Testicular Cancer 1:06:56 P450 Substrates as Potent DNA Alkylators 1:08:24 Benzopyrene: Pro-Epoxide DNA Alkylators 1:10:18 Toxins from Molds Growing on Grains 1:11:36 Nitrogen and Sulfur Mustards 1:13:35 The Re-Evaluation of RNA's Importance
10.5446/18862 (DOI)
Welcome to week two of Chemistry 128, Introduction to Chemical Biology. I'm Professor Weiss. I'll be talking to you today about reactivity. Okay, so last week we talked about the molecules that compose your cells, and our goal this week is to understand how those molecules interact with each other. There are two forms of this interaction. The first kind is that the molecules can decide to react with each other. They can start to form covalent bonds. Bonds can break, bonds can form. So we want to understand this property that we're going to call reactivity, and to understand this we're going to look at arrows and the language of arrows, which organic chemists have developed as a way of communicating this reactivity. I have to tell you, I think that this is one of the great achievements of organic chemistry. This is one of those accomplishments that all humans can be proud of, because it reduces something that otherwise seems mysterious to a simple set of rules from which you can derive many, many reactions, essentially all reactions found on our planet. And to me that's really exciting, because that means that this language is universal, and it's one that's very broadly applicable. And so that's my bias going in: I think this is really cool. Okay. So we're going to have a quick review of arrow pushing, and then I'm going to show you examples of applying this language of arrow pushing, this language of reactivity and chemistry, to the chemistry that was found on our planet before life started. This is a type of chemistry called prebiotic chemistry. Now, obviously there were no humans present to observe directly what was going on. However, we can infer what was going on in this prebiotic period, and this is an artist's conception of what the planet might have looked like, based both on the fossil record and on experiments that attempt to recreate the conditions that were found during that prebiotic period. Okay. So we're going to be using what we learn to look at the synthesis of the molecules that compose a cell. And then the next topic we'll talk about this week is making molecules using a combinatorial approach. This is essential in chemical biology. This combinatorial approach takes place in your cells (it's one of the reasons why your immune system can very rapidly respond to foreign invaders), and it also is used in many chemical biology laboratories around the world. And so, for this reason, I have to introduce this concept of combinatorial chemistry and combinatorial biology to you this week. And then finally we'll look at the second mode of molecules interacting with each other. Recall, at the start of this, I said there were two modes, the first mode being reactivity that results in covalent changes, bonds forming and breaking. The second mode is non-covalent interactions. This is when two molecules slide alongside each other and decide to form a complex with each other. And the rules that determine whether or not this complex forms are also rules that we can understand. Importantly, this is also a really tough frontier for chemical biology. So while I'll be able to tell you about the rules for reactivity and covalent bond-breaking and bond-forming reactions, I cannot speak with such certainty when we start talking about non-covalent interactions. There's a lot less that we understand, and that makes it one of the challenges.
But at the same time it also makes it really exciting, because that means there are opportunities for people like yourself to get out and do new experiments to start to elucidate those types of rules. Okay, so I have some announcements before we go on. That's kind of the overview; let's zoom down and look at the particulars. First, for this week I'd like you to read chapter 2 in the textbook. That's this book here. Now, there are occasional times where the treatment in the textbook is more advanced than what I'm talking to you about. For example, there's information about inversion of phosphate geometry, phosphorus geometry. I'm not going to discuss that. And if I don't discuss it in the lecture, then don't get too hung up on it in the book. Okay, so simply skim the concepts that are not presented in the lecture. If I don't talk about it in lecture, it's not important for the class in terms of our exams and what I'll be testing you on, so simply skim through it. Homework: I'd like you to work the chapter 2 problems, in particular every odd problem. And there will be a worksheet to guide our discussions this week, which will be posted to the class website. In addition, there will be one handout this week, which will be posted to the course website. Please download this on Tuesday and skim through it. This handout is an example of a journal article report (which I used to call the book report), and then on Thursday I'll discuss it with you in further detail. Okay, at this point I would usually ask if you have any questions; if you do have questions, you can either email me or the TAs. Okay, so let's review where we've been, and then we'll get started on what I told you about at the beginning as our big picture. So we want to understand the function of human cells at the level of atoms and bonds. This is the smallest unit that actually is meaningful to us as chemists. And as I described to you last week, cells are bags of molecules. They are bags that are chock-full of molecules. The molecules are stuffed inside the cells. There is no elbow room. These things are jam-packed into the cell. So because of that, we expect lots and lots of interactions, which is our topic for this week. But I'm getting ahead of myself. Let me continue reviewing what we talked about in the previous week. First, we talked about the composition of a gene as an on-off switch with instructions. We talked about how molecules are synthesized in the cell, using the template of DNA to make a messenger RNA, which then is translated into proteins; and then proteins and RNA carry out all the various instructions that are articulated by the DNA. We also discussed six types of organisms, but in this class we're going to be generally talking about either bacteria or human cells. And it turns out there's a lot of chemistry in just bacteria and human cells. So our goal this week is to reduce the complexity of diagrams like this down into a few rules that chemists like ourselves can understand. Okay. So let's get started with: what is life? What is the stuff, what are the molecules, that composes cells? What are the rules that govern them? In 1944, the physicist Erwin Schrödinger wrote a very influential book called What Is Life? I highly recommend this book to you. It's a slim little volume, and it's a fun read. It's not particularly challenging. But the concepts that he presents are, to me, really earth-shattering. These are paradigm-changing.
What Schrödinger argued is that the molecules that govern your cells, the molecules that allow organisms like yeast and bacteria and humans to live, are governed by physical laws, by the same laws that we talk about in chemistry and physics classes. There's nothing special or unique about the molecules found in living organisms. They are simply molecules that are governed, again, by physical laws. So this book persuaded a generation of physicists to explore biology after World War II. This was an amazingly influential book. And it persuaded this generation, which included great scientists like Francis Crick and Jim Watson and many others, to explore biology, and to do this by applying concepts from physics and concepts from chemistry. And the results are more or less what I presented to you last week when we talked a little bit about the structure of molecules. So this is good news for us. The good news is everything that you've been learning about in chemistry classes before now applies to biology. There's nothing special about biology. There's no sort of life force that animates molecules found inside the cell. No; rather, the same rules that you learned about in general chemistry and that you learned in organic chemistry apply to the molecules found inside your cell. Okay, so let's talk a little bit about those molecules found inside your cells. Our goal is to understand first the reactivity of those molecules, and then second we'll talk about their non-covalent interactions. So covalent interactions, reactivity, first. In organic chemistry you learned the powerful language of arrows, which are a way of depicting the overlap of molecular orbitals. Let me remind you of some conventions of those arrows, the conventions of this language of organic chemistry. So the first of these is that these arrows depict the overlap of molecular orbitals: they show, for example, electrons in a highest occupied molecular orbital overlapping with the lowest unoccupied molecular orbital of the second reactant of the reaction. Okay, so in this basic reaction we have an amine and we have a ketone, and the two of these are going to be reacting with each other. So if you take the amine and you take the ketone and you mix them together, we can predict in advance that a reaction will take place. And here's why. What we can predict is that the lone pair on the nitrogen is going to be highly reactive. Why is that? What's special about that lone pair? It happens to be very high in energy. It is a highest occupied molecular orbital. And it's going to want to react with the pi bond, this carbonyl functionality of the ketone. What's so special about the carbonyl functionality of the ketone? Well, it happens to have a low-energy unoccupied molecular orbital. Okay, now let's break down what these molecular orbitals actually look like. What this looks like is: the lone pair on this nitrogen is found in an n orbital. So it's in a high-energy state. It is the highest occupied molecular orbital, the HOMO. And it's going to be overlapping with the lowest unoccupied molecular orbital of the carbonyl of the ketone, which happens to be the antibonding orbital of the pi bond. Okay, this is what it looks like in terms of molecular orbitals. And this is what it looks like on top, in terms of organic chemistry, in organic chemists' speak.
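A standard way to quantify this HOMO-LUMO interaction (this is textbook perturbation theory, not the lecture's own notation) is the second-order stabilization energy:

$$\Delta E_{\mathrm{stab}} \;\approx\; -\,\frac{2\,\bigl|\langle n \,|\, \hat{H} \,|\, \pi^{*}\rangle\bigr|^{2}}{\varepsilon_{\pi^{*}} - \varepsilon_{n}}$$

The factor of 2 counts the two electrons in the lone pair, and the denominator is the energy gap between the nitrogen lone pair (the HOMO) and the carbonyl pi star (the LUMO). A high-energy HOMO and a low-energy LUMO shrink that gap, which makes the stabilization large, and that is exactly why this particular lone pair and this particular pi bond find each other.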
Good news: we organic chemists have agreed to the convention that we will depict complicated reactions like this one using this simplified descriptor. Okay, and this is good news. I don't think anyone wants to spend lots of time on the test deriving what these molecular orbitals look like and trying to describe an antibonding orbital in terms of its lobes and so on and so forth. It would be just way too complicated. So we're going to be using this description here. Now, the real challenge for us comes from the fact that the molecules that we talk about in biology oftentimes have multiple functional groups. It's not atypical for a biomolecule to have, say, hundreds if not thousands of carbonyls, or to have thousands upon thousands of different lone pairs. So the real challenge is for us to figure out which of those lone pairs and which of these carbonyls is actually going to engage in a reaction. And for that, we're going to fall back on orbitals to decide which of these is going to be most reactive. Okay, so again, what we're going to be talking about is this overlap of molecular orbitals. That overlap of molecular orbitals, the filled-unfilled overlap, leads to the formation of new bonds and the consequent breakage of others. Okay, so when this lone pair overlaps with the antibonding orbital of the carbonyl, the pi star orbital of the carbonyl, the result is a new covalent bond, directed by this first arrow. Now, on the other hand, we know that this carbon can't have more than four bonds to it, and so five bonds would be disallowed. And so, for this reason, in concert with the formation of this new bond, there's breakage of the pi bond between the carbon and the oxygen of this ketone. This is good news, right? This totally makes sense, because what we're doing is we're populating this antibonding orbital, and in doing so we're making that pi bond break, right? If you put electrons into an antibonding orbital, what does it do? The bond breaks, hence the name antibonding. Okay? So this overlap, to me, is kind of like the peanut butter and jelly of organic chemistry. We're always going to be talking about a HOMO, a highest occupied molecular orbital, overlapping with the LUMO, the lowest unoccupied molecular orbital, and in the same way that peanut butter and jelly taste so good together, this orbital overlap works so well. It is so complementary in terms of reactivity. Okay, so let's get back to our challenge again. The challenge is, in biology we oftentimes have many different possible reactivities; we oftentimes have many different possible reaction mechanisms that we can draw. Despite that plethora of possibilities, what we will see is that there is oftentimes one and only one dominant mechanism for a particular set of molecules, and again, this is good news. Okay? So, for example, let me show you sort of an easy case, where we're going to be looking at a reaction with two possible mechanisms, one that makes chemical sense and one that does not. And so, by doing this, we can start to eliminate a lot of different possibilities. Okay, so here is a clash of two possible wills. In this reaction, one possible mechanism has the lone pair attacking the antibonding orbital of the carbonyl and going through the transition state that's depicted down here. This reaction is an addition-elimination reaction.
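Before we walk through the competing mechanisms, it's worth remembering why barrier height is so decisive: rates depend exponentially on it. A quick illustrative calculation using the Arrhenius equation (the 10 kJ/mol figure below is my own made-up number, not from the slide):

$$k = A\,e^{-E_a/RT}, \qquad \frac{k_{\mathrm{add\text{-}elim}}}{k_{\mathrm{S_N2}}} = \exp\!\left(\frac{E_a^{\mathrm{S_N2}} - E_a^{\mathrm{add\text{-}elim}}}{RT}\right)$$

$$\Delta E_a = 10\ \mathrm{kJ/mol},\; T = 298\ \mathrm{K}: \quad \exp\!\left(\frac{10\,000}{8.314 \times 298}\right) \approx 57$$

So even a modest difference in barrier heights means essentially all of the reaction flux goes over the smaller hill: the lazy-electron picture, made quantitative.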
It goes through this transition state in the addition, and then in the elimination the chloride is eliminated, giving us a substitution of the nucleophile in place of the chlorine, okay? It makes sense, fundamental reaction. A different type of reaction mechanism might look like this, where the nucleophile directly displaces the chloride. In doing so, the lone pair on the nucleophile is populating the sigma star antibonding orbital of the bond between the carbon and the chlorine, okay? So two possible mechanisms. One involves the pi star orbitals, this one involves the sigma star orbitals. And I guess at first blush these two reaction mechanisms might both look totally legitimate and both equally valid. The problem is they aren't. We can actually readily eliminate the reaction mechanism on the right, the one that goes through the SN2 reaction. Instead, what we can do is actually very quickly decide that only the addition-elimination reaction will work. So returning to this clash of two wills, why don't we look at a transition state or a reaction coordinate diagram for the two possibilities, which I think tells us which possibility is correct and which one is wrong. Okay, so this reaction coordinate diagram is depicted over here. So on one side is the reaction that I showed on the previous slide whose mechanism is an SN2 reaction, and on the right, this is the addition-elimination reaction, okay? So in this, I acknowledge this is a complicated diagram, bear with me. So over here these are the starting materials, this is the acid chloride, this is the nucleophile, and again, if this reacts through an SN2 reaction you will get this left reaction coordinate, and if it reacts through an addition-elimination reaction you get the right coordinate. Now two possibilities, small little hill, big hill. Which of these two is preferred? Small hill, big hill. All right, now let's just imagine you're an electron, and you have to decide which one you would prefer. Would you prefer tramping up the very, you know, steep ski slope, or would you prefer the much shorter hill? Okay, I will tell you also that electrons are lazy, that they do not expend any extra energy than they need, and in doing so they are going to prefer very strongly the tiny little hill, the much smaller hill of the addition-elimination reaction, over the SN2 reaction. Okay, this makes sense, that's the way electrons live their lives. So what this tells us is that yes, there are two possible reaction mechanisms for this reaction, yet only one is actually correct. The only one that's correct is this one on the right, the addition-elimination reaction; the one on the left has to go through a much higher energy SN2 reaction. Okay, now I'm going to explain in greater detail in a moment why it is that the one on the right is preferred over the one on the left. Okay, to understand that I need to tell you about three possible components of orbital overlap. So the energy in this interaction is proportional to three components, okay, and let me go back. Recall that in reaction coordinate diagrams, the y-axis depicts energy, where a higher number up here indicates higher energy, and again electrons, being lazy, prefer lower energy, okay. So that again is why the smaller hill is preferred to the bigger hill in terms of which side to go on, left side or right side. Okay, now this energy is proportional to three components. Component number one is charge-charge interactions, okay.
So if these molecules happen to have plus charges and minus charges, then that will have some interactions, some Coulombic interaction. In addition, if the molecules have a repulsive interaction with each other, that will also contribute energy as well, okay. So charge-charge interactions, these are governed by the social convention that opposites attract, okay. So in social circles, opposites attract is, I think, commonly accepted; it works as a formula for dating websites, and it also works reasonably well as a formula for molecules. So happily, social conventions mirror atomic formulas, okay. So charge interactions are one possibility. If I go back you could see that we don't really have any charge interactions operative in this mechanism as depicted here. The nucleophile is neutral, the acid chloride also neutral. Charge interactions, off the table. Second term, repulsive interactions. So this would be if the molecules have some sort of steric hindrance that prevents them from overlapping with each other. And this is a really important component in terms of preventing molecules from interacting. It's used extensively in biology, it's used extensively in enzymatic catalysis. Again, over here that doesn't seem to be a possibility, right? The nucleophile has a wide open lone pair, the acid chloride is similarly wide open, it has just a methyl group attached to it. So there's really no repulsive interactions that are operative here. And by the way, just to remind you, the repulsive term of this equation is the term that allows this hammer to pound in the nail in this wall, okay. So these repulsive interactions, that's basically the Pauli exclusion principle, which means that more than two electrons cannot occupy the same molecular orbital, okay. And so for this reason, hammer starts pounding on nail, nail goes into the wall to get away from hammer, okay. They don't, you know, suddenly merge with each other and magically start to create some sort of hybrid material, okay. Things don't happen that way. Okay, so repulsive interactions are clearly important, but not so operative in this reaction, right? These two can snuggle up as close as they want; there's no, you know, prevention of that by steric shrubbery. Last one, attractive interactions. This third term I would describe as mysterious, right? This is not the term that we're used to talking about. This attractive interaction is nothing more than the filled-unfilled overlap that I've been talking to you about today, okay. So here, reduced down to its terms, is a different representation of the same equation from up above. In other words, the reaction energy for a particular set of interactions is proportional to Coulomb's law, which governs charge-charge interactions, plus the steric terms, minus the filled-unfilled orbital overlap, okay. And it's this third term over here that governs whether or not the molecules actually get to form and to break bonds. Okay, now here's the deal. The problem is that these three terms interact in a complicated way, okay. If we go out and just, you know, start applying this equation to every possible social situation we find ourselves in, we're going to have trouble, okay. And I guess the most obvious thing is, you know, the opposites-attract rule only carries you so far, okay.
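Written as a single schematic expression, the three components just listed take a Klopman-Salem-style form. This is a sketch with generic symbols (charges Q, distances R, dielectric constant ε, orbital coefficients c, resonance integral β), not an equation taken from the slides; the signs follow the lecture's summary of Coulomb plus steric minus filled-unfilled overlap:

$$\Delta E \;\approx\; \underbrace{\sum_{k<l}\frac{Q_{k}Q_{l}}{\varepsilon R_{kl}}}_{\text{charge-charge}} \;+\; \underbrace{\Delta E_{\text{rep}}}_{\text{filled-filled (steric)}} \;-\; \underbrace{\sum_{r}^{\mathrm{occ}}\sum_{s}^{\mathrm{unocc}}\frac{2\,(c_{r}c_{s}\beta)^{2}}{E_{s}-E_{r}}}_{\text{filled-unfilled overlap}}$$

Note that the third term grows as the filled and unfilled orbitals get closer in energy, which is why the HOMO-LUMO pair dominates the reactivity.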
Before you get married to your, you know, snuggly significant someone, it might be a good idea to find out whether that opposites-attract business carries over to, you know, I don't know, temperature of the bedroom or something like that, okay. So for this reason, this equation over here is a good deal more complicated. Why don't we take a look? Okay, so opposites attract, here's an example. We have hydroxide. We have nitrogen. If they attract so much, negatively charged hydroxide, positively charged nitrogen, our first instinct might be to attempt to draw an arrow between the lone pair on this hydroxide and the positive charge on the nitrogen. That would be wrong, wrong, and wrong. It would be totally wrong. And the problem is that this is wrong at every level. The result here would be a fifth bond to nitrogen. And nitrogen, being in the first row of the periodic table, cannot possibly handle such a large number of bonds. Remember, the first-row elements of the periodic table, carbon, nitrogen, oxygen, cannot handle more than eight electrons around the atom, okay. That's four bonds. Five bonds, totally wrong, okay. Another big problem with this that infuriates me is notice the arrow starting on the negative charge and moving to the positive. Okay, that's wrong too. Because again, arrows are supposed to depict overlap of orbitals. I'm getting a little ahead of myself, okay. Here's the correct way to do it. The correct way to do this is to show hydroxide attacking the carbon and displacing the positively charged nitrogen in an SN2 reaction. Okay, so this opposites-attract business only carries us so far. Okay, so that's our first problem: charge-charge interactions very rarely provide the operative mechanism in organic chemistry, and for that matter in bioorganic chemistry. Really, charge-charge interactions are very important for non-covalent bonding. Not so important for covalent bonding. And in fact, potentially very, very misleading. So, cautionary note. Instead, we need to turn to molecular orbital theory. Molecular orbital theory can explain the otherwise unexplained. And I'll give you one example of this before we go back to our canonical example that I showed you earlier. Okay, so for example, this methyl ester has a preference for the syn conformation versus the anti conformation. And to a first approximation, this should strike you as rather odd, right? Because in this case over here, the methyl group is as far as can be away from the lone pairs that populate the oxygen, right? Those lone pairs that stick up like Mickey Mouse ears above the oxygen. And so, this anti conformation should, to a first approximation, appear to be the preferred orientation. But you know, when we look closely at this, and we can using various spectroscopic techniques, what we find is that actually the dominant conformation is the syn conformation. And you can start to understand this if you think about overlap of molecular orbitals. Okay, here again, the syn conformation should appear to have some steric clash. But again, molecular orbitals explain why it is preferred nonetheless. Okay, so I keep talking about molecular orbitals. I think it's time for us to dive right in and start to dissect them and look at them in greater detail. And let's get started. So, in molecular orbital theory, we're going to be talking about atomic orbitals. So, the atoms of a molecule each have an atomic orbital associated with them.
Okay, so the nitrogen has some atomic orbital. The oxygen, the carbon, even the hydrogen has some little tiny atomic orbital associated with it. Those atomic orbitals are the s, p, d, and f orbitals. Okay, that's where the electrons hang out. They hang out in shells or orbitals. I prefer the word orbital, which describes their orbit as they orbit around the nucleus of the atom. Okay, and remember, those electrons, that's the business end of the atom. That's what endows it with functionality. That's what makes molecules the way they are. Okay, now, here's the thing. Oftentimes, these electrons are not simply in either an s orbital or a p orbital. Instead, they typically hybridize into hybrids of s and p orbitals. Okay, and we're used to this concept. These hybrid atomic orbitals are given the names sp3, sp2, and sp. Here's the important part. Okay, so this is review. I know that you've seen these hybrid atomic orbitals before. This is the part that matters to us as chemical biologists and bioorganic chemists. The s character of these hybrid atomic orbitals determines their stability. This totally makes sense. Okay, so an s orbital is a sphere where at the very center of the sphere is the nucleus of the atom. The nucleus is positively charged. The sphere defines the orbit of the electrons. And in this sphere, those electrons can cozy up as close as possible to the positively charged nucleus. Okay, so this is a great example of opposites attract, and that attraction equals stability. On the other hand, a p orbital, as depicted up here, has the nucleus at a node between the two lobes of the orbital. Okay, so the nucleus is right here in the center again, but that happens to be a zone of exclusion where the electrons are not allowed to exist. Rather, the electrons in this orbital are hanging around either in this lobe up here or this other lobe down here; they are not allowed to get up too close to the positively charged nucleus. And so for this reason, the s character of a hybrid atomic orbital determines the stability of that orbital, okay, of the electrons in that orbital. Conversely, the p character defines the instability. It defines how reactive and how nucleophilic those electrons in that hybrid atomic orbital really are, okay. That's kind of like, you know, defining how unhappy the electrons are, okay. Happy electrons are found in these spherical s orbitals. Unhappy electrons are found in pure p orbitals. And what happens when electrons are in unhappy situations? Well, they will move. They will do everything they can to find more stable orbitals for themselves, okay. So these are the atomic orbitals, specifically the hybrid atomic orbitals over here. And p character confers reactivity and basicity. So for example, if we look at a series of lone pairs found on a carbon, what we find is that the higher the p character, the more reactive that resultant lone pair will be, okay. And this can be dramatically illustrated in terms of basicity, okay. So here's a lone pair in an sp3 hybridized orbital. Its pKa is 50. Compare that against a lone pair in an sp or an sp2 hybridized orbital. The difference here is truly dramatic, okay. So the pKa is only 41 in the case of the sp2 hybridized orbital. And then it's way down at 24 in the sp hybridized orbital. This is an enormous difference, okay. Remember, pKa's are on a log scale.
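Since pKa is a log scale, the comparison about to be made can be done in one line using only the values just quoted:

$$\Delta \mathrm{p}K_{a} = 50 - 24 = 26 \quad\Longrightarrow\quad \frac{K_{a}(\mathrm{sp})}{K_{a}(\mathrm{sp^{3}})} = 10^{26}$$

Equivalently, the conjugate base of the sp3 C-H (the sp3 lone pair) is 10 to the 26 times more basic than that of the sp C-H.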
So in other words, this guy up here is 10 to the 26 times more reactive than this guy down here. And by more reactive, I mean how avidly it's going to be reaching out and ripping protons off of its neighbors, okay. And this tells us almost immediately that, for example, organometallic compounds are going to be extremely avid at grabbing protons, to the point where they're incredibly flammable and nearly explosive. Okay. Now, this 10 to the 26 times, again, is huge, right? That's a 1 followed by 26 zeros. It's such a large number, it's hard actually for us to even imagine it. Okay. So enormous differences are determined by this p character and s character. I hope by now everyone who's listening to this and everyone in my class can explain why it is that these guys are so much more reactive than these guys. And it should make sense just from geometric considerations as depicted here. Now, these hybrid atomic orbitals recombine into molecular orbitals in molecules. Okay. So the hybrid atomic orbitals only carry us so far. More often, these hybrid atomic orbitals are shared between atoms, and that sharing is what gives us bonds. Okay. Now, these molecular orbitals are given the names sigma, pi and n. Okay. So these hybrid atomic orbitals form bonds with other atoms, and that yields molecular orbitals. The energy of these molecular orbitals is defined very specifically. And there's no way around this. I basically just have to tell you: I'd like you to memorize the chart on this slide. Okay. So please memorize the order of this reactivity, where sigma molecular orbitals are lowest in energy, pi are higher in energy, n are even higher. Okay. So these are the filled molecular orbitals. These are molecular orbitals that have electrons in them. And these electrons are depicted by the up arrows and the down arrows. Okay. That's a convention that you've seen before. Okay. Now, sigma makes sense. Sigma are the molecular orbitals that define single bonds. S for single, s for sigma. Pi, this defines double bonds, and that's convenient, right? Pi looks kind of like a double bond. N, the electrons in n orbitals are the lone pairs that are hanging out around the atoms. Okay. So when n orbitals are present, those are going to be the highest occupied molecular orbitals. So almost immediately that clues us in that we need to pay attention to those lone pairs. Okay. What about the unfilled molecular orbitals that we're going to encounter in chemical biology? These will be found in three molecular orbitals. And again, I need to ask you to memorize the order of their energies. Okay. The lowest in energy are p orbitals. P orbitals are exactly what I showed you a couple of slides ago. Okay. That's them, these over here. This is what a p orbital looks like. It has a lobe up here and another lobe down here. P orbitals we find when we look at carbocations. Okay. The empty hole that is the carbocation is a p orbital. Okay. The other electrons that surround the carbon of the carbocation, those other electrons are in sp2 hybridized atomic orbitals. So the remaining empty atomic orbital is a p orbital. Okay. So most of the time we don't really have carbocations. The reason for this is that they are extremely reactive, that empty orbital being so low in energy. And so for this reason, in biology we very, very rarely find carbocations. And in week eight, I'm going to show you an exception to this. But for now, let's keep in mind that we're just not going to see these very much.
And again, the reason is biology takes place in water, and carbocations react avidly with water. Pi star. This is the anti-bonding partner to the pi orbital. And sigma star is the anti-bonding complement to the sigma orbital. And again, we're seeing this relationship where pi star is lower in energy than sigma star. Okay. So here's what I need to tell you. Good news: you don't have to worry about where all those electrons are in a molecule. And this is really fabulous news. Okay. If you just stop, take a moment, take a deep breath, pause and appreciate this. Because the molecules we talk about when we talk about biology are fiendishly, fiendishly complicated. Okay. This goes back to the business that I talked about earlier, of the hundreds if not thousands of pi bonds, the thousands upon thousands of lone pairs. The good news is we get to simplify all of that complexity down to just worrying about the frontier orbitals. Okay. So in other words, we only have to worry about the frontier highest occupied molecular orbital and the frontier lowest unoccupied molecular orbital. Okay. In other words, all we have to do is focus in on the highest occupied molecular orbital, the one that has a lone pair in an n orbital, and also the lowest unoccupied molecular orbital over here. So in other words, if there's an available p orbital, it's going to react first. Okay. If there's a carbocation, everything else will come to a halt and the carbocation gets its day in the sun. It gets to dance around. Okay. If there is a lone pair, the lone pair will be the dominant reactivity. Okay. This is goodness. Okay. It simplifies everything. We just have to look for the highest energy HOMO and the lowest energy LUMO. Okay. What does this mean? What this means is that this highest occupied HOMO is the filled frontier orbital, and this is the orbital from whence all nucleophilicity, all basicity springs forth. Okay. And I apologize for the kind of antiquated English, but really that's how I think about things. Okay. This is the orbital that is the business end of this complicated molecule. It doesn't matter how many possible lone pairs it has. It doesn't matter how many different possible configurations it has. All that really matters is its highest high and its lowest low. Okay. Again, this is majorly important because it simplifies things for us. Okay. So this HOMO, highest occupied molecular orbital, is the filled frontier orbital, and it's the nucleophile in reactivity. Okay. Now, the intrinsic nucleophilicity is governed by the energies of these molecular orbitals, where again, the highest in energy is the n, the non-bonding molecular orbital that has the lone pair, and the lowest in energy are the electrons of the sigma or single bonds. Okay. To reduce it down to simplest terms, we're never really going to be seeing reactions that start with sigma bonds. Okay. It just doesn't happen in chemical biology. Most of our reactions are going to spring from lone pairs that are in non-bonding orbitals, occasionally electrons in pi bonds, but really, we don't have to worry about the electrons in sigma bonds. We know they're there. You know they're there. They're there, but we don't have to get wrapped up in them. And this again is good news, because there's a huge number of electrons in these complicated molecules that have, you know, thousands upon thousands of atoms. Okay. What about the lowest energy unoccupied molecular orbital, or the LUMO?
This is the unfilled frontier orbital, and the lowest energy unoccupied is the most available molecular orbital. This is the molecular orbital that's going to be the center of attention for reactivity. Okay. And again, where you have these complex molecules, this is kind of like the funnel toward which all reactivity zooms. Okay. Now again, we need to know this order over here, where p is lower in energy than pi star, which in turn is lower in energy than sigma star. So if we're given a choice of different sites for a nucleophile to attack, the nucleophile will choose every time to attack the p orbital, because it's lowest in energy. And again, we see p orbitals when we look at carbocations. If there are no carbocations present, which again, as I said earlier, is exceptionally common because carbocations are very, very rare in biology, where biology takes place in water. So if there are no carbocations present, we can eliminate this one and we start focusing on pi star orbitals. If there are pi star orbitals that are available for reaction, then it's likely that this will be the dominant reaction. Occasionally, you come across a molecule that doesn't have a pi star, in which case then you might have an attack on a sigma star. This is rare, okay, especially as depicted here. This is utterly wrong as depicted in the slide. I find it offensive, but I'm stuck with it. This might happen, for example, if there was a sulfur here; then you might have this sort of reaction taking place. For now, let's keep in mind that our electrophiles in our reactions are mostly going to be molecular orbitals consisting of antibonding pi bonds. Okay, so it's the pi star or antibonding pi molecular orbital. Okay, I want to switch gears. If you have any questions about molecular orbitals or hybrid atomic orbitals, don't hesitate to shoot me an email or talk to the TAs, come to my office hours, et cetera. I told you about what you do to decide what the reaction mechanism is. We now have to talk about how to actually tell me what that reaction mechanism is. Okay, so oftentimes in chemistry, we have some notion that molecules are reacting, but we need a clear way of communicating that reactivity. So organic chemists have developed this wonderful vocabulary using arrows, and so let's take a closer look at what those arrows are. The arrows are going to be starting from the highest energy occupied molecular orbitals, the HOMOs, and they're going to be ending on the lowest energy unoccupied molecular orbital, the LUMO. Okay, this is a golden rule. This is a rule that always applies. Okay, your arrows start on orbitals, they end on orbitals. They start on HOMOs, they end on LUMOs. And again, they're always going to start on the highest HOMO, and they'll end on the lowest LUMO. Okay, so again, that lowest LUMO is the lowest energy unoccupied molecular orbital. That's the most available, and in turn, that's the most reactive. Now the problem is, again, we oftentimes have many HOMOs, we have many LUMOs. What's an organic chemist to do? What's a student supposed to do? So when in doubt, refer to this idea of looking for the highest HOMO and the lowest LUMO. I can simplify it, cut it down to make it even easier for you. So most of the time, just start: put your pen on a lone pair and start pushing electrons to end on the best electrophile. It's that easy.
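As a study aid, here are the two orderings the lecture asks you to memorize, collected in one place (energies increase to the right):

$$\text{filled (HOMO candidates):}\quad E_{\sigma} < E_{\pi} < E_{n} \qquad\qquad \text{unfilled (LUMO candidates):}\quad E_{p} < E_{\pi^{*}} < E_{\sigma^{*}}$$

So arrows start from the highest filled orbital present (usually an n lone pair) and end on the lowest unfilled orbital present (a p orbital if a carbocation exists, otherwise usually a pi star).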
If you're in doubt, you're stuck there at your desk during an exam, you don't know where to start: put the pen on the lone pair and just start drawing. Okay, end the arrow on the best electrophile; nine times out of 10, 99 times out of 100, maybe even more, you'll get the answer right just by doing that. Okay, so I need to talk to you about some rules. We have rules because this is a language, and in order for us to be clear in what it is that we're communicating, we need to have some conventions. Okay, the conventions we're going to follow in this course are the following. And by the way, before I present these conventions, I should tell you, I'm a stickler for these rules. Okay, if you give me something that doesn't follow these three rules, chances are, even if it's correct conceptually, it won't get full credit. Okay, and the reason for this is it's kind of like turning in an essay that has incorrect grammar to your English class or something like that. What's your English professor going to do? Are they going to give you an A for great ideas and a C for bad English? No, your professor is probably going to give you a C overall, because the goal is to communicate effectively. Okay, so in the same way, when we speak using the language of arrows, we have to follow these conventions, because this is what convinces us that we know what we're talking about. Okay, so the conventions are: arrows never indicate the motion of atoms. And this is one that, if we stop to think about it, actually is kind of profound. I think that all of us are used to having arrows showing, you know, a football player who's over here, let's say the quarterback, moves back here and then gets behind this guy. And then another arrow shows this guy moving forward. Those are the kind of arrows that you've been drawing, you know, I guess since you were able to draw arrows. Okay, which is to show motion, to show the fourth dimension, really, to show some element of time. In organic chemistry, we don't use arrows in that way. Rather, we're using arrows to depict overlap of orbitals. We're not depicting it in terms of time. We're depicting it in terms of thermodynamics, not kinetics. In other words, we're depicting an overlap of orbitals that's allowed. Okay, so arrows do not indicate the motion of atoms. Yes, it's true, the atoms must cozy up to each other. That's kind of understood, that's lurking in the background, but that's not really what arrows show you. Second, arrows never start or end on charge. Since these arrows are depicting the interaction of filled and unfilled molecular orbitals, charge is irrelevant. Okay, charge, formal charge, is one of those nice conventions that makes Lewis structures so much easier to understand. Yet the charge itself does not show you where the electrons are. It doesn't show you anything about the molecular orbital. And so drawing an arrow from one formal charge to another is worthless. Okay, so again, arrows never start or end on charge. Third, and here's the one that really embodies everything: arrows begin with lone pairs, with pi bonds or sigma bonds, and end on unfilled orbitals. And I want you to be really precise about how you draw these things. That precision indicates that you understand what is going on. Okay, and by drawing your arrows to precisely end where it is that that pi star orbital should be, you're telling me something. You're telling me a story. You're telling me where it is that those electrons are going to appear.
And in doing that, you're describing to me the reaction that's taking place. Okay, so I need to have all of these things taken care of when it comes time for exams and things like that. Okay, make sense? Okay, so why don't we take a look at an example? Okay, so an example is this very simple problem. Okay, and the problem is we're going to have a lone pair on the nitrogen over here, and we're going to do a simple nucleophilic substitution, a substitution nucleophilic reaction that substitutes this lone pair on nitrogen, this amine, for the chlorine. Everything looks good. This is a reaction very similar to the one I showed you at the very beginning of the class. Avoid the poisoned candied apple of simplicity. Rather, fall back on HOMOs and LUMOs. Let me show you what I mean. Okay, and to do that, I need to roll up the screen. Here's what I mean by avoiding that tempting, poisoned apple of simplicity. Okay, so in this example, there's a lone pair on the nitrogen. All right. Okay, so here's our reaction again. And the simple, and I would call it even simple-minded, possibility is for the nitrogen to simply displace the chloride in an SN2 reaction. All right, so in the simple case, we have this reaction mechanism here. Okay, this case I would call an SN2 reaction, right, a substitution nucleophilic bimolecular reaction, and this will give us this guy over here. And then this can lose a proton. Okay, so I'll show this base. The base, for example, could be chloride. Okay, and this base can deprotonate this nitrogen, giving us the product. Okay, now what's wrong with this? This is totally, totally wrong and completely unacceptable. It appalls me. It's upsetting to me. What's so wrong about this and so appalling is the fact that we're attacking a sigma star orbital. Okay, this is an attack on this sigma star orbital over here, when we have several perfectly good pi star orbitals that are available, right? So sigma star is not the lowest energy unoccupied molecular orbital. Far from it; there's plenty of pi star molecular orbitals that are going to be lower in energy. Why don't we explore those as a possible reaction mechanism? Okay, so a different mechanism would start... okay, so I'll draw some lines through here. A different and more correct mechanism will this time have the lone pair in that n non-bonding orbital of nitrogen attacking the pi star orbital of this alpha-beta unsaturated carbonyl. Let me show you. Okay, so here's the lone pair. It's now going to attack the pi star molecular orbital. Electrons bounce, bounce all the way to the electronegative oxygen. Okay, so again, this is an attack not on a sigma star, but an attack on a pi star molecular orbital, an antibonding orbital. Okay, now why is this so much better? This is better because the pi star is lower in energy than sigma star, and so for this reason, this addition-elimination reaction is greatly preferred. Okay, this is actually the operative mechanism for this reaction. Okay, you can continue on. I encourage you to do so, and in the end, you get this product over here. Okay, there will be many times in this class when I'll kind of set stuff up for you and I'm going to then let you finish it off on your own. I apologize for that. This is upper division organic chemistry, upper division chemical biology. We're at that point where I don't have to show you every step. There are fundamental steps that I want you to know.
There are fundamental steps that I expect you to know, but I'm not going to show them to you during every lecture. Okay, rather, I want you to go home. I want you to fill them in in your notes. I want to make sure that you know them, because on the exam, I will ask you to show me those steps. But on the other hand, I'm not going to dwell on them today. Okay, I just don't have enough time to talk about them in a class of this length. Okay, so the lesson from this is clear. The lesson is: don't be tempted by simplicity. Instead, look at the overlap of orbitals. Which one is a better overlap, overlap with the sigma star or overlap with the pi star? Pi star is lower in energy, and it's therefore greatly preferred. Okay, let's move on. I have to lower this again. All right. Now, the other thing to make this work is that you also have to make sure that you're drawing the correct Lewis structure. For the most part, I don't think this is going to be a problem in Chem 128, but it is important that you set things up correctly. Okay, if you are drawing, for example, five bonds to nitrogen, a lot of the reactivity of this N-oxide is not going to be apparent, because this is totally wrong. Okay, similarly, you know, in terms of the number of bonds that you draw, this helps you in terms of keeping track of things. For that matter, it's also essential for you to depict correctly the formal charge. Oh, thanks. Sorry. This formal charge helps to guide us. For example, the negative charge on this carbon over here, that should look kind of funny to you, right? Carbanions, those should look funny. That should be extremely reactive. So formal charge helps to guide us in terms of drawing these correct mechanisms. I'll have a lot more to say about hydrogen bonds. I don't really care about dative bonds; we won't see them in this class. Let's not get into it today. I'll talk to you more about hydrogen bonds in a moment. Okay, so arrows start with bonds or lone pairs. And here are some correct depictions of arrows. Okay, so in this case, where we're showing a bromide leaving for an elimination reaction, the bromide takes off. And notice that the arrow is starting at the carbon-bromine bond. Okay, in other words, the electrons in that carbon-bromine bond decide to step out the door and leave with their friend the bromine, giving us a bromide ion. Okay, so here's electrons that are starting with another bond, in this case a pi bond. Here they are starting with a sigma bond. Here they are starting with a pi bond. And here they are starting with a non-bonding lone pair. Okay, all of these cases are correct. Contrast that with these cases over here, where I'm showing you arrows starting on charges. This again is deeply appalling and totally wrong. So arrows do not start on atoms, for example, like this or like that; instead we want to draw them starting on the bonds themselves. This should make sense, right? Arrows are trying to depict the overlap of orbitals. They need to start where the electrons are. The electrons are found in these bonds. Electrons are not found in this negative charge. They're not really found around this bromine. Instead, we're talking about the electrons that are shared between bromine and carbon. Those are the electrons that matter. Okay, now that's where they should start. Let's talk about where they should end. So arrows need to end on atoms and bonds. Okay, so here's a lone pair attacking a proton. It's ending directly on that proton. Okay, so here it is ending on an atom, the proton.
Here it is ending on the carbon of a carbocation. Here it is ending on the hydrogen, or the proton, during a beta elimination step. Okay, so arrows never terminate in empty space. So for example, when bromide is stepping out the door, the electrons don't simply hop out and then go off into empty space. For that matter, this arrow would be wrong if it started at this carbon-bromine bond and then had the electrons just going off into empty space. That's not correct. The electrons don't get to walk off into empty space. That would be extremely high in energy and extremely repellent. Rather, the electrons get to end on this bromine atom, giving us bromide. Okay, so arrows need to end on atoms. They will depict, again, this overlap of filled and unfilled orbitals. Okay, hydrogen, for that matter, is always attached to something. Okay, I'm starting to get down to my pet peeves, but this is one of those pet peeves that does matter. Okay, hydrogen is not some atom that kind of floats around next to the molecule. Rather, hydrogen is directly attached to some particular atom. And this matters a great deal, because where it's attached will determine to a large extent whether or not it's going to be acting as an acidic proton or perhaps not acidic at all. Okay, so these terms, proton, hydride and hydrogen atom, are three different depictions of the hydrogen atom, and they have three different meanings. They have totally different meanings. Okay, so H plus is the proton, H minus is the hydride, and H radical is the hydrogen atom. We really don't find them just kind of floating around like this in the chemistry that takes place inside cells. Okay, protons aren't just floating around inside the cell. Rather, they are always attached to something. Maybe they're attached to a water molecule to give you a hydronium ion, but they're not just kind of hanging out; they're doing something. Okay, hydrogens do not like being by themselves. Okay, so in other words, what you want to avoid is showing a proton just kind of hanging out in space, waiting around for some lone pair of electrons to attack it. That's not what happens. Hydrogen doesn't get to do that. Okay, furthermore, hydrogen radical also doesn't really occur, nor does hydride really occur. Okay, rather, in solution chemistry, we find species that can either donate a proton, donate a hydrogen radical, or donate a hydride. Okay, so what I propose you do is instead of depicting H plus as a reagent, instead depict H plus as catalytic quote-unquote H plus. Those quote-unquotes are going to tell us that yes, we mean H plus, but what we really mean is H plus that's been picked up and delivered by some other species. In this case, that might mean attached to this methanol molecule that's going to be its delivery vehicle, or you can even write catalytic HA, where in this case it's HA, the proton attached to its conjugate base, that's going to be delivering the proton. Okay, any one of those is fine. It is important for you, however, to follow these conventions, because they communicate to me that you know what molecular orbitals are being overlapped, and it tells me whether or not you understand the chemistry that's involved with these reactions. Okay, one second. Okay, I want to conclude today's lecture by discussing with you one other element of hydrogen atoms, and that's the hydrogen bond. Everyone needs a favorite bond.
My favorite Bond is of course the great Sean Connery, but today I'm going to be talking to you about a second favorite, which is the hydrogen bond. Okay, the hydrogen bond governs so much of biology that it is essential for us to really get to understand it correctly. Okay, so hydrogen bonds are actually largely a Coulombic interaction. They describe the sharing of a hydrogen atom between two partners. One partner is going to be our hydrogen bond donor, and a second partner will be a hydrogen bond acceptor, and this hydrogen bond will be depicted by this dashed line. Okay, so this dashed line is going to be our convention for the hydrogen bond. We're going to use this a lot. Okay, hydrogen bonds, for example, hold together the two strands of DNA. They make molecular recognition possible, the non-covalent interactions between molecules. So hydrogen bonds are absolutely essential to chemical biology. However, it turns out that the energy of the hydrogen bond is very sensitive to the environment and the geometry that's involved with the sharing of that hydrogen. The geometry in this case that I'm showing you over here is of a perfectly linear hydrogen bond, which is the best possible example. Okay, so in this case the lone pair on this oxygen of water down here is perfectly positioned to share this hydrogen of this water up here, and the oxygen, hydrogen and oxygen are lined up in a straight line. Oftentimes that is not the case. Okay, so for example, we can look at hydrogen bonds that are found in the active sites of enzymes, and we find that instead of having this neat straight line we get a bendy line instead. That bendy line, that bent hydrogen bond, is much, much weaker. Okay, so this is kind of the optimal geometry: the optimal hydrogen bond acceptor, which is a lone pair, the optimal hydrogen bond donor over here, and this will be our canonical hydrogen bond. Now here's one of the problems. One of the problems, amongst others, is curved arrows. Curved arrows confound us when it comes time to talk about hydrogen bonds. The reason is curved arrows and hydrogen bonds simply don't mix. Curved arrows depict the overlap of filled and unfilled molecular orbitals, whereas hydrogen bonds are showing us a partnership of sorts between a donor and an acceptor, and there really isn't this sort of overlap that leads to a covalent bond in the case of a hydrogen bond, and this becomes tremendously confounding. So for example, if you want to just show transfer of the proton on this nitrogen atom to the lone pair of the oxygen, you might be tempted to simply draw a hydrogen bond in here, and that would be utterly incorrect. Because this hydrogen bond is basically saying the hydrogen is somewhere between here and there, somewhere in the middle, somewhere on the sides, whereas over here, in the case of the curly arrows, you're saying no, it's going to pick this up, it's going to pick up the proton wholesale, hang onto it for a while and give you a positive charge on oxygen. These are two very different depictions. So what we're going to be doing in this class is showing those proton transfers as an explicit step. Okay? So hydrogen bonds are going to be useful for us for talking about non-covalent interactions, but not useful at all for talking about covalent interactions, the reactivity that I've been talking to you about today. It turns out that proton transfers of the sort that I showed in the previous slide, these sorts of proton transfers over here, are extraordinarily fast. Okay?
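To attach a rough number to "extraordinarily fast": the ceiling for a bimolecular reaction in solution is the diffusion limit, which can be estimated from the Smoluchowski relation. The figures below are standard textbook ballpark values for small molecules in water, not numbers from the lecture:

$$k_{\text{diff}} \;\approx\; 4\pi N_{A}\,(D_{A}+D_{B})(r_{A}+r_{B}) \;\sim\; 10^{9}\text{ to } 10^{10}\ \mathrm{M^{-1}\,s^{-1}}$$

where the D terms are diffusion coefficients and the r terms are encounter radii, expressed in consistent units.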
They're oftentimes diffusion controlled. In other words, they hit the speed limit of reactivity for reactions that take place in solution. That kind of speed limit and that kind of proton transfer ability is actually tremendously useful. Okay? So these are diffusion-controlled reactions. So proton transfers to and from heteroatoms are very, very fast. For that matter, in the same way that hydrogen bonds require linear geometry, proton transfers also require linear geometries. And I can tell you that almost immediately this is going to annoy you. This takes away one of the conventions that you mislearned back in sophomore organic chemistry. I know it was cool back then to show a proton transfer as a neighboring oxygen, say, picking up a proton over here on the nitrogen. And you have this completely ridiculous and totally crazy four-membered ring transition state. It galls me to even say this. Can you imagine four atoms getting together to form, you know, some sort of very strained four-atom ring transition state? It's total insanity. Even more insane, notice that the geometry between oxygen, hydrogen and nitrogen is not perfectly linear. Instead, it's bent at a 90-degree angle. And these kinds of proton transfers do not happen this way. Instead, they exclusively prefer a linear geometry. So only linear geometries are going to count when we talk about proton transfers. And so for this reason, I need to take this particular step out of your vocabulary. Okay, now, it was acceptable back in sophomore organic chemistry. It's no longer acceptable. So acids and bases are required to catalyze proton transfers and tautomerizations. So instead of showing it like this, a much better alternative, not even an alternative, the correct way to depict this, would be to show the oxygen picking up a proton from a catalytic acid. And then in turn, the conjugate base of this acid acts as a base to deprotonate the neighboring positively charged nitrogen, the ammonium ion. Okay, so in all cases, we're going to see that acids and bases are required to catalyze these proton transfers. And that turns out to be a general rule. And the good news for us in terms of chemical biology is that oftentimes, or really at all times, we can find abundant numbers of different molecules that are all too willing to volunteer to be those catalytic acids and bases. And the obvious example would be, for example, water. Water can be a hydronium ion to act as a proton donor. Water can also act as a base to accept protons and become a hydronium ion. And since all biology takes place in water, feel free to use water as that catalytic acid and that catalytic base. Okay, we've come quite a ways. I've shown you proton transfers. I've shown you how to draw arrows. We've talked about the rules that govern these electron movements in terms of filled-unfilled overlap of molecular orbitals. We're now going to transition to looking at some examples of this, and I'm going to show you this on Thursday when we talk about the molecules found on Earth that compose all living things. So why don't we stop here? When we come back next time, I'll be showing you examples that apply the principles that we've talked about today. Thank you very much.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:07:01 What is Life? 0:09:07 Arrows Depict the Overlap of Molecular Orbitals 0:19:57 The Three Components of Orbital Overlap 0:24:27 Charge-Charge or Coulombic Effects 0:26:31 Molecular Orbital Theory Explains the Otherwise Unexplained 0:28:05 Combining Atomic Orbitals 0:39:56 Highest Occupied Molecular Orbital 0:41:57 Lowest Unoccupied Molecular Orbitals (LUMOs) 0:44:08 Anatomy of an Arrow 0:46:38 3 Rules for Mechanistic Arrow-Pushing 1:01:27 H is Always Attached to Something 1:04:48 Hydrogen Bonds 1:08:58 Proton Transfers
10.5446/18861 (DOI)
Okay, welcome back. Quick sound check. Everything okay? Great. Thank you. Welcome back. Today we're going to be finishing up the topic that we were talking about last time. Last time we were talking about combinatorial approaches in chemistry. And then we'll talk a little bit more about combinatorial approaches in biology, and I'll show you a couple of examples of this. All right. Okay. Huh, that's interesting. All right. Okay. So again, we're here. We just completed our survey of biomolecules. I'm going to complete the topic of making combinations of biomolecules, and then we'll talk about tools for chemical biology. And this is really important, because these are the tools that you're going to be using when you write your proposals. So I'm glad you're all here today, because you absolutely need to hear this to be able to write a good chemical biology proposal. Which, recall, last time I told you was going to substitute for the final exam in this class. There is no final exam in this class. We will not have a final. Instead, on the very last day of class, you will hand me a 10-page or so written proposal with figures. And it will be an original idea. Something that no one on the planet has thought of before. You will be the first. And it's going to be really fun, because it's really great to come up with creative ideas. And that's really the ultimate goal of science. Science is really a creative enterprise. Our goals are to invent new concepts, to tell people new visions of the universe. And to do this, we have to somehow invent these new experiments to do. Okay. So I'm going to be talking to you today about the tools in your toolkit that you're going to be using to do this assignment. Okay. I already talked about these announcements. I'm skipping some stuff. Oh, office hours. I had office hours yesterday that got derailed by a student emergency. And I know at least one of you sent me an email about that. I apologize. I will have office hours today. And in addition, I sent an email back to that student. So I apologize if you came by yesterday. There was a student health emergency that absolutely needed my attention, and so I had to close my door to deal with that. Okay. So apologies there. Other office hours: tomorrow, and Miriam will have her office hour on Friday. And I'm hoping Krithika will be back next week. And I'll introduce you to our Chilap office hours next Tuesday. Okay. So. All right. Any questions about any of the announcements, things like that, things that we talked about last time? Questions about the course structure? Oh, I got an email from someone, and I apologize for not replying. The email was, when are you going to post online the slides that I'm flicking through? And the answer is I'm going to try to get to that today. And then my plan is to basically post all of my slides from the previous year. And so that way then at least you'll have a guideline for what the slides will look like. Chances are I'll heavily modify these or slightly modify these, depending on how much time I have before each lecture. I mean, literally five minutes before the lecture I was making changes to the slides. It's almost impossible to stop me from doing that. I just love this so much. So because of that, I'll be posting kind of a guideline for what the slides will look like in advance, and then I'll come back with something that's more definitive. Okay. So at the end of today's lecture, then, I'll post all of the week one slides in a definitive way.
But I'm also going to post last year's week two, week three, week four, et cetera. Okay. Sound good? Okay. Any questions about that? Okay. Great. Okay. So let me review what we talked about last time. If there are no questions about any announcements, things like that, we're going to go straight into the material. Okay. Good. So what we talked about last time was the definition of chemical biology. Chemical biology uses techniques from chemistry, often new techniques from chemistry, often techniques that have been invented specifically to answer problems of biology, but not always. And then these techniques from chemistry are used to address understanding biological systems at the level of atoms and bonds. That's the goal of chemical biology, to really understand how organisms are living, how they do the things they do, at the level of atoms and bonds. Okay. So I'm really fascinated to know about that hydroxyl functional group that donates a key hydrogen bond or provides a key Bronsted acid to some mechanism in an enzyme active site. That's the part that makes me run to work, sort of the details of this. I basically want to use the arrow pushing that you learned in sophomore organic chemistry to explain biology. And that's the goal of this class. And that's the definition of chemical biology. So last time we learned about two key principles that organize biology. The first of these is the central dogma, which provides the roadmap for all biosynthesis taking place inside the cell. Everything that the cell has to synthesize will flow through this central dogma. This is the flow of information for biosynthesis by the cell. So everything that your cells will synthesize is going to be encoded in some way by the DNA inside your cells. Oh, and can I ask you, if you have an empty seat next to you, to move over to the right, just to open up some seats on the edges? Some people I know are coming in from other classes, you know, other classes that are ending about when our class is starting. So if you have an empty seat on your right, if you could just scoot over and leave seats on the edge, that would be really appreciated. So, the second key principle that we discussed was evolution. Evolution provides a principle that helps us organize vast amounts of knowledge and really, in the end, simplifies biology enormously. And it's actually a principle that all of you are going to be applying when you design your chemical biology experiments, because I will tell you in advance that I will not accept any proposals that involve experiments on humans, okay? So experimenting on humans is its own special topic that I can actually teach a whole quarter on, okay? It requires ethical considerations. It requires tremendous design considerations. It's non-trivial to sample, for example, a diverse population of humans and ensure that you're getting diversity. So all of those considerations are beyond the realm of this class. So instead what I'm going to ask you to do is experiment on non-human organisms. You might, for example, choose cells from humans, or you might choose model organisms. And by choosing those model organisms, you're applying a key principle from evolution, which is that that model organism descended from some common ancestor that we share, and in doing so acquired the same mechanisms that govern its chemistry and its chemical biology. And so that means if we learn something about this model organism, we can then apply that knowledge to understanding how humans work.
Now naturally there's limits to this, right? If your model organism is a salamander and you're interested in understanding how the salamander regenerates its arms when you cut them off, which incidentally would be an absolutely fascinating topic for a proposal, there's a limit to how much analogy you can draw back to humans, right? We humans don't have that same mechanism, obviously. And it would be absolutely fascinating for me to learn from you how it is that you plan to apply the biochemistry that you're learning about stem cell growth to develop, say, limb regeneration in humans. I would love to learn that, okay? Okay, so evolution is important to us because it tells us that fundamental processes are more or less the same for every organism on the planet. And I'll be showing you a few examples in the next few weeks that illustrate this universality of chemical mechanisms. In addition, we also saw that evolution is really a tool by which we can evolve molecules to do powerful stuff for us inside the laboratory. And I want to pick that topic up for us today, okay? So I'm going to start there. Any questions about anything that we saw on Tuesday? Okay, now I also got some really fascinating emails from some virologists in the audience who pointed out that there's actually a picornavirus protein that is known to start with an RNA template and then replicate RNA. And that's absolutely fascinating. I wasn't aware of that. So there are exceptions to what I'm teaching you. I'm going to try to teach you the sort of most general thing. And yes, there will be exceptions. Don't hesitate to point them out to me. I'm fascinated by those exceptions too. Okay, so let's pick up where we left off. Okay, before we do, one last thought about this proposal assignment. To do the proposal successfully, what you have to do is come up with a novel idea, okay? I will not accept any proposals that don't have something new in them, okay? And I will actually ask the TAs to do Google searches and literature searches in PubMed and other sources to verify that what you're proposing to do has not been done before, okay? So you have to come up with a creative new idea. This sounds daunting. But let me provide some guidelines on how to do this, okay? So the first thing that you need is a series of experimental tools, and then knowledge of the problem, okay? So the experimental tools I'm going to provide you today. I'm going to give you a toolkit by which you can go out and start to address problems in chemical biology. The second portion, knowledge of the problem: you need to know that actually, you know, there's a key step in limb regeneration that's not so well understood. That second step comes from reading the literature, okay? And the first assignment in this class, the journal article report, is designed to help you address this second thing, knowledge of problems, okay? So in doing the assignments that are required for the class, these two things are going to come together, okay? Today we're going to address number one, and then item number two you're going to get by Valentine's Day, February 14th. You'll have a journal article report, and in doing this assignment, you'll be looking at the literature and you'll start to identify problems in the field that interest you, okay? So you'll choose a journal article that's relevant to your interests. I don't know what your interests are. Let's say you want to be a dermatologist, okay?
Maybe you'll find a chemical biology report that uses skin cells and looks at, say, melanoma development in skin cells, and looks at it at the level of atoms and bonds. I would love to hear more about that, and then by doing this assignment, you'll start to know what are the big unknowns in skin cell tumor development, okay? What are the things that people are fascinated by, that they're designing experiments to address? And you'll have the tools from this lecture that will allow you to address those problems, okay? Sound good? Okay, so how to find the problem. The first thing I need to ask you to do is start reading either Science or Nature, okay? So I assume many of you are science majors. If you're not a science major, raise your hand. Okay, you're a fascinating case. I'd like to talk to you later. So come to my office hours, introduce yourself. Okay, so everyone else is a science major. You're going to get a degree in science. I'd like you to read either Science or Nature pretty much for the rest of your life. Pick one. You don't have to read them both. And furthermore, you don't have to read them all that carefully. Just skim through them. By doing that, you will be an informed citizen, okay? You will know more about science than 99.99% of the people on this planet. And furthermore, you'll learn something about what's really cutting edge. Okay, you only have to spend 10 or 15 minutes flipping through Science or Nature, just looking at the headlines and seeing, oh, they discovered a new class of quasars out in, you know, some outer galaxy. Just doing that is enough to help you, well, certainly have much better banter at cocktail parties, let's say. Okay? And to me, that's enough. Okay, so this is part of your education. Okay? So start reading Science or Nature. Simply flip through them. That helps you identify problems. The second way is to look at PubMed or Medline, which are the same thing. And I'll be talking some more about PubMed in a future lecture. Okay, so hopefully you already know what PubMed is. Hopefully you already know how to apply it. I'll be showing you how to apply it to chemical biology problems in a future lecture. But these are the two ways that you sift through literature to find stuff that's interesting and that grabs your attention. Because in the end, you want your proposal to be about something that really interests you. Okay? You're going to spend a lot of time on this. Okay? Many, many hours. And if it's not something that totally interests you, that's not somehow related to the bigger picture of your career aspirations, it's not going to be as much fun. Okay? And in the end, if it's fun, you'll do a better job. I'll get a better proposal back out of it. And that's the part that interests me. Okay. Now, I chair the admissions committee in the Department of Chemistry at UC Irvine, and I was reading the application essays from all the wonderful applicants who have applied to UC Irvine this year. And I came across this wonderful quote up here: the more you know, the more questions you can ask. And so, those questions that you can ask, those are the questions that you will be addressing with your proposals. So, our goal is to get your knowledge up to the point where you can start asking those questions. Okay? All right. Now, I know this all seems very abstract, but it's not going to be as abstract in a moment. Okay? Sound good? Questions so far? All right. Don't be too daunted by the assignment.
It will all come together when you're ready. Okay. Last announcement, next week's plan. Next week, we're going to be starting on Chapter 2. Please skim Chapter 2 in advance. Take a look through Chapter 2 even before I get to it. Chapter 2 is a review of arrow pushing. Chapter 1 was a review of the biology you need to know. And next week, we'll be talking about arrow pushing and the mechanistic organic chemistry that you need to know to do chemical biology. Okay? So, next week, we're going to have two lectures on mechanistic arrow pushing. Now, here's the deal. I'll be out of town on Tuesday. But I've prerecorded Tuesday's lecture. And so, I'm trying a little experiment this year. I understand that the video from last Tuesday's lecture is already available and is going to be posted online shortly. Okay? So, I will send you the link to last Tuesday's lecture. And at the same time, I'll send you the link to next Tuesday's lecture. Okay? And so, that next Tuesday's lecture, then, you can watch in your pajamas, in the comfort of your dorm room. Okay? And so, we're going to try that for Tuesday's lecture. I think that will actually work. But I'll know very quickly if it doesn't work. Okay. So, and then Thursday, I'll be back. So, Tuesday I'll be at Cal State LA giving a seminar. Thursday, though, I'll be back. Okay? So, good? Okay? All right. So, that's next week's plan. We're going to be reviewing important stuff from organic chemistry. Mainly, this focuses on structure and reactivity of carbonyls. If you are weak in 51C, please reread the chapters on carbonyl reactivity, structure, things like that. There might be two or three chapters for you to read. Mechanisms involving carbonyls, especially the aldol reaction. Ninety percent of carbon-carbon bonds in chemical biology are made using an aldol reaction. You need to know what an aldol reaction is. Okay? If this word, aldol, is totally unfamiliar to you, then you need to spend a little bit of time this weekend reading about it, getting familiar with it again. Okay? Because I'm going to assume that you know about the aldol reaction when we get to it. Okay? Now, on the other hand, in your review of sophomore organic chemistry, don't get worked up about reactions for the synthesis of carbonyl-containing compounds. Anything that you learned in 51C about how to make a carbonyl using PCC is more or less worthless for this class. Okay? Because PCC is not found in cells. It's totally toxic. And so, good news, as you're skimming through, as you're reviewing, if necessary, don't get too worked up about memorizing a bunch of name reactions and stuff like that. Okay? Instead, focus on mechanisms, focus on the reactivity, understand how carbonyls work, that sort of thing. That's what you really need to know going into the next few weeks of this class. Okay. That was a long set of announcements. But thanks, everyone, for coming out for that. All right. Let's get started on the new material. I want to talk to you today about combinatorial approaches first. And I'm going to pick up on the last slide that I showed you last time and make sure that I didn't skim through it so quickly that it didn't make any sense to you. And then we'll go on to the next topic. Okay. So, last time, oops, I was talking about modular architecture in organic synthesis. This is a, whoops, that's not what I wanted. Just give me one moment to figure this out. All right. I guess we'll have to live with this. Okay.
So, modular architecture is a design principle that allows you to synthesize compounds in a way that allows access to combinatorial libraries. And last time, we talked about this principle of combinatorial libraries. Combinatorial libraries are big collections of different molecules. And in a combinatorial library, you have a different set of modules that are shuffled around and recombined in a way that makes a whole series of different molecules. Okay. And we talked last time about this class of compounds called benzodiazepines. The name of this class of compounds should be vaguely familiar to you. This is an important class of compounds that's found almost ubiquitously in medicinal chemistry, and they're used for, amongst other things, anti-anxiety medications. So, you could make a combinatorial library based upon this benzodiazepine scaffold by varying the R functionality shown here. And you can do this by a very straightforward synthetic plan that involves the combination of a compound that has both a ketone and an aniline functionality, together with some sort of alkyl halide, an acid halide, and an amine. And so, these will all snap together to give you this benzodiazepine framework. I'm not showing you the mechanism for this, and it's not so important for our discussion, so we're going to skip over it. But you can imagine having, say, 20 different versions of this ketone-based compound with different R1s and R2s, 20 compounds over here that have different R3s, and then, say, 20 compounds that have different R4s. When you put these all together, and you would do this in individual reaction flasks, you'll end up with a large number of different compounds. Okay, so let's just do 20, 20, and 20. Okay, so 20 of these, 20 of these, 20 of these. If we make all possible combinations of those, how many compounds will we end up with? How many benzodiazepines? 20 times 20 times 20. 20 to the third, which is 8,000. Thank you. Okay, you guys are scaring me now. Okay, so 8,000 compounds can very readily be synthesized by starting with simply 60 different precursor compounds. And that's pretty powerful. If you have 8,000 different benzodiazepines, each one with potentially some bioactivity, then that collection could have a lot of very powerful new therapeutic compounds in it, for example. Okay, and then we talked about some other different modular frameworks that can be used. Now, I want to shift gears. That's the example of using combinatorial chemistry in the synthetic laboratory. This principle, of course, borrows heavily from biology. And it turns out that your immune system uses a similar principle to develop diverse molecules called antibodies, which are one of the first lines of defense against foreign invaders. Okay, so if heaven forbid you decided to pick the apple up off the ground over there and start chewing away on it, you would find a lot of foreign bacteria in that apple. And so likely antibodies would play some role in fighting off those foreign bacteria. Okay, so here's the way this works. So antibodies' job is to be binding proteins. Their job is to grab on to non-self molecules. So I'm going to refer to this class of compounds as professional binding proteins. That's what they do for a living, okay? That's their profession. And it's one of the immune system's first lines of defense. Structurally, they look like this.
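As an aside, the library arithmetic above is easy to sanity-check in code. This is a minimal sketch in Python; the building-block names are hypothetical placeholders for illustration, not real reagents.

```python
# Counting a combinatorial library by enumerating its building blocks.
# The names below are invented placeholders, not actual compounds.
from itertools import product

ketone_anilines = [f"ketone_{i}" for i in range(20)]  # 20 ketone/aniline cores (varying R1, R2)
alkyl_halides = [f"halide_{i}" for i in range(20)]    # 20 alkyl halides (varying R3)
acyl_partners = [f"acyl_{i}" for i in range(20)]      # 20 acid halide/amine partners (varying R4)

library = list(product(ketone_anilines, alkyl_halides, acyl_partners))
print(len(library))  # 20 * 20 * 20 = 8000 candidate benzodiazepines from 60 precursors
```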
I told you earlier about one convention for looking at protein structures: using a ribbon to trace out the backbone. I didn't tell you really what these arrows mean and these curlicues. We'll get to that later. But a different convention for looking at protein structures just maps a surface onto the outside of the protein structure, okay? So if you were able to have, you know, special electron microscopy eyes, eyes that had amazing powers of resolution, what the antibodies would really look like is something like this, okay? So they have this sort of bumpy exterior. Now, I've colored this antibody to highlight its structural components, okay? So antibodies, it turns out, are composed of a total of four chains. Two of these chains are called light chains. They're shown here at the top in green and then this sort of cyan color and this purple color. And then there's two heavy chains, okay? The details, not so important. Don't get worked up about memorizing how many chains each protein has. Here's what's important, okay? Antibodies have evolved a mechanism that allows them to recognize diverse binding partners. And they do this by having a series of flexible loops that can accommodate different shapes that they need to bind to, okay? So I'm turning now to the very tips, the tippy top of the antibody up here, which is labeled binding site. This is where the antibody will attempt to bind to that foreign invader. Let's say you picked up a virus when you bit into the apple, and now the virus is floating around your bloodstream. So the antibody is going to attempt to bind to the exterior of this virus. And if we zoom in over here, this is the tippy top. This is called the Fab region of the antibody, over here. And you can see, in these van der Waals spheres, this is an antibody binding to a small molecule. So it's binding to some target. The exact target, not so important for us. But notice how the target is cradled in these loops, okay? The loops are gripping this antigen very gently, but the antigen is fully buried in these loops. So these loops are flexible to accommodate many different potential binding partners. That flexibility is critical. That means they can recognize, you know, virus one or virus two, or if you go to Ethiopia and pick up some totally different virus, they will also pick that one up too, you hope. And at the same time, these provide enough other types of molecular recognition, which we'll talk about later, to allow strong enough binding to muster an immune response. And then the antibodies basically sound the alarm, the red coats are coming, and get the immune response to go into high gear to start killing off that foreign invader. Okay? So, very first line of defense against foreign invaders. Now, the problem and the big challenge is that these antibodies need to recognize stuff that your human organism, you, have never seen in your life, okay? That means that if you travel to India, or you travel to, I don't know, Palos Verdes, or wherever it is that you travel and you pick up some new organism or some new foreign invader, the combinatorial library of antibodies needs to be ready to recognize that. And of course, you know, this stuff has never been seen before. The antibodies have never trained on that.
So the strategy that your immune system uses is to have a vast collection of potential binding partners, okay? So make a big collection of different antibodies, each one with structural differences, to be ready to recognize any particular type of invader, okay? Now, here's the other thing. The size of the collection is huge, okay? And these antibodies are produced by immune cells called B cells, or B lymphocytes, which look like this. This collection is fairly enormous. It's estimated to be on the order of about 10 billion or so different antibodies, okay? But earlier, I told you that the human genome is only about 24,000 genes, okay? So obviously, there can't be 10 billion different molecules in the immune system, each encoded by its own gene. So instead, the strategy that the immune system has evolved is one whereby different gene segments are recombined in a way that then produces a combinatorial library of different antibodies, okay? So let me show you. There are 40 of these variable genes, the V modules, 25 diversity modules, and 6 joining modules, and they're shown here. So here's the V genes, the D and the J genes. And then by combinatorial gene assembly, these are brought together to encode the antibody heavy chain gene, okay? So that encodes the heavy chain that I showed on the previous slide. Similarly, the light chains are produced by another type of combinatorial gene assembly whereby one of the V's is picked out together with one of the J's, et cetera, okay? So in doing this, you can get a very vast library of different antibodies. Furthermore, the antibody diversity pool is further diversified by a series of genetic manipulations that includes variable gene joining. So when the genes are joined together, they're not sort of glued together neatly. Instead, there's little parts that are clipped off or added in. And then furthermore, there's a process called hypermutation that goes through and makes tiny little mutations in the encoding sequences as well. So in the end, you end up with around 10 billion or so different antibodies, each one different structurally and potentially able to recognize whatever foreign invader you happen to encounter during your life. Okay, makes sense? Okay, so to summarize, what we're seeing is a strategy for combinatorial synthesis that's used in the laboratory and also used by your cells. Okay, in both cases, there are these modules that are shuffled around and then rejoined in literally random fashion to give us a vast collection of different molecules. And then we hope that these different molecules are going to be functional when the time comes that we actually need them. Okay, makes sense? Okay, yeah, question over here. With all that diversity and mutation, wouldn't some of those antibodies react against us and cause autoimmune disease? Okay, yeah, so there's a separate process that subtracts out things that recognize self as well. Yeah, that's an interesting question as well. So, yeah, thanks for asking. What is your name? Joshua. Joshua, okay. Okay, changing gears. So, the last topic in chapter one is a survey of the tools that we need in chemical biology to be able to address problems and address the frontiers of chemical biology. So, I'm going to have a very quick survey in the next 50 minutes or so. I'm going to share with you a series of different tools that you can then use in your proposals.
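Backing up to the gene-assembly arithmetic for a moment, here's a minimal back-of-the-envelope sketch. The heavy-chain segment counts are the ones quoted in lecture; the light-chain counts (roughly 40 V and 5 J for the kappa light chain) are an added assumption for illustration, and real repertoires are far larger because junctional trimming/addition and somatic hypermutation multiply this baseline enormously.

```python
# Rough combinatorial arithmetic for antibody gene assembly.
v, d, j = 40, 25, 6                 # heavy-chain V, D, J segment counts from lecture
heavy_chains = v * d * j            # 6,000 heavy-chain combinations
light_chains = 40 * 5               # assumed kappa light-chain V x J count (illustrative)
print(heavy_chains)                 # 6000
print(heavy_chains * light_chains)  # 1,200,000 from heavy/light pairing alone
```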
Okay, so think of this as you're trying to put together your toolkit. This is going to be the hammer, the saw, the nail gun, whatever. Okay, so these are the things that you need to design experiments in chemical biology. Okay, so again, this is useful for planning your proposal assignments, but this also provides a toolkit for further experiments. We're going to be referring to this toolkit quite a bit in this class. So, later in the quarter, I'll be able to say, oh, yeah, you remember those antibodies that I mentioned earlier? Those are now going to be in your toolkit. Okay, this toolkit is very diverse and vast. It ranges from chemical reagents to entire model organisms, and there's a huge amount of diversity in that range of different tools. So, chemical biology as a field uses all kinds of different techniques. It uses techniques from molecular biology, it uses the very latest in nonlinear optics to image cells, and everything in between. Okay, in addition, I also want you to know these tools because I want you to be able to design experiments on the fly to determine, you know, X. Okay, and a very common midterm question for me would be: how would you design an experiment to determine what kind of chemical signaling is being used by your gut bacteria to let their neighbors know that sugar has arrived? Okay, which is actually a pretty interesting question. I'd like to know how you'd do that. Okay, in addition, I want you to know how to describe negative and positive controls. We're going to be talking about experiments, and all good experiments have both negative and positive controls. So, why don't we talk about that topic first? Okay, so if you're going to be designing experiments, you need to know first what a negative control is and what a positive control is, because you need to be able to design these into any experiment that you want to design. Okay, so good experiments have both a positive and a negative control. Positive control first. A positive control is a set of experimental conditions that provide an expected response or a positive result. Okay, so in this case, you basically want to know, do the conditions in my flask produce, say, amplified DNA or something like that. And so what you'll do is you'll start with a sample that you know should work a certain way in your experiment. Okay, it should give you a predetermined result, completely consistently; it should give you that expected result every time. So, this tells us that our experimental apparatus is working. Okay, and you need to know this because oftentimes, the experimental apparatus in chemical biology labs isn't simply a stirrer and a hot plate, where you can just test the hot plate by sticking your fingers on it for a nanosecond. The chemical apparatus might be a tiny little microcentrifuge tube, and you've shot in a bunch of different reagents, you know, 10 different reagents, all of which are clear, none of which you can really assay all that readily. So, what you do is you set up a set of conditions where you know the result, and then you see if that result is recapitulated under your experimental conditions. Okay, so this is the positive control, and you always want to have one of these. Good experiments have positive controls. Good experiments also have negative controls. This is where you leave out some experimental condition in your experiment.
Maybe you leave out the test sample. Okay, so earlier I was talking to you about trying to assay, let's just say, some sort of microorganism found in your stomach that responds to the presence of sugar. Okay, and maybe you want to know whether that microorganism releases indole to signal to its neighbors. Okay, actually that's not a bad experiment. So, your experimental apparatus will be measuring the concentration of indole; your positive control will be, say, some bacteria that you know releases indole, and that tells you whether or not your experiment is working. The negative control can be entirely missing the bacteria. Okay, so you do the exact same experiment, but you leave out the bacteria, and no indole should result. Okay, if you see indole resulting, that tells you that you have a problem. That tells you that you have, say, a contaminant, for example. This should result in a failed experiment or a negative result. So, it's an experimental condition missing a key element, say, the test sample, the thing that you're trying to test. Okay, and again, it should result in a failed experiment. If it does not result in a failed experiment, that tells you that in your conditions, you have some sort of source of contamination. You absolutely need these negative controls, okay, because all too often in chemical biology, we have lots and lots of contaminants, and there are lots and lots of false positives, and we just don't like that kind of thing. You want to know that if you're going to tell your friends down the hall that you discovered a new base in the DNA sequence, you want to know that actually that's the real thing, okay, that you're not telling your good friend something that turns out to be totally wrong later, and it makes you look stupid, because no one likes to look stupid, okay? Now, because we have very complicated experiments in chemical biology that involve lots and lots of variables (remember I told you earlier about the one that has 10 different things thrown into a little tiny microcentrifuge tube?), we often have multiple negative controls, one for each possible variable, okay? So for example, you might leave out the magnesium from the buffer, just to know, does the magnesium contribute to this experimental result? You know, is this actually a magnesium-dependent enzyme that produces the indole as expected? If you leave out the magnesium and you're still getting some result, that could tell you that maybe it's not a magnesium-dependent process, okay? So negative controls tell you a lot about what's going on in your experiments, okay? And a good experiment should have both negative and positive controls. Any questions about what positive controls are, what negative controls are? Yeah? So if your positive control fails but your negative control works, what does that mean? Okay, this is a great question. It happens all the time. Okay, so the question, and what is your name? B? Okay, so B's question is, if your positive control fails and your negative control works, what does that tell you about the experiment? I would say that that tells you that your experimental conditions are worthless and you cannot interpret the experiment, okay? Because if the positive control fails to work, then you really don't understand what's going on in your experimental conditions, okay? The positive control really tells you whether or not you understand all of the elements that compose your experiment.
If the negative control fails in the way you expected it to fail, well, maybe it's failing for the same reason that the positive control failed. You know, maybe you left out some key reagent, right? Maybe you didn't heat it up to the right temperature and hold it there for long enough or something, okay? So both your positive control and your negative control have to work in order for you to interpret the results. Okay, now I'm being really dogmatic here. I will tell you that we scientists oftentimes look at experiments that don't necessarily have every control working, okay? I'll look at those. My students will show me those all the time. I'll look at them. But I'm not going to, you know, call up the Nobel Prize Committee in Stockholm and tell them about it, okay? Because it's probably not worth a lot of time. But we'll use that to guide the next set of experiments. We'll say, well, what is it that failed in the positive control? And then we'll troubleshoot and design the next experiment using that information. We'll look at the negative controls and say, oh, yeah, that failed, that failed, that failed. So these variables are probably okay. What about this one? Okay? So you can get a lot of information from experiments that fail. In fact, to be a successful scientist, you absolutely need to learn how to work with experiments that fail, because 90% of the time they fail, okay? But, you know, that's the way life is. So you learn as much as you possibly can and then you move on. But to make strong conclusions, though, you need experiments where both the positive control and the negative control are working as expected, okay? Okay, good question, B. Other questions? All right. Let me show you an example. Let's imagine that you wanted to amplify some DNA sequence using a technique called PCR. Details not so important now. Hopefully you already know what PCR is. I understand it's taught in high schools now. If not, you can look it up in the textbook. If not, don't stress about it. I'll talk about PCR later. Later, you'll need to know how this works. For now, let's just use it as a method for amplifying DNA, okay? And furthermore, here's a method for visualizing DNA as bands on a gel. And I know all of you have done TLC. This is kind of like TLC, except the bands are upside down, okay? It's more or less upside-down TLC. It's more or less the same technique that's used to visualize compounds, except we're visualizing DNA by running it through an agarose gel. Again, if that technique is not familiar to you, don't panic. We'll talk about that later in this class. For now, we have a method for amplifying DNA and a method for visualizing the resulting DNA, okay? Now, here's our positive control. It's the lane over here that's labeled with a plus, okay? So over here is a set of conditions that you know results in DNA. And notice that there is a band right here, a big bright band, okay? So that tells us that our positive control works. You have a sample of DNA that you know should amplify under that set of conditions. And lo and behold, it gives you that nice bright band. The next lane is the negative control, okay? Say that one is missing the DNA sample, okay? We don't see that same band, so we don't have to get worried about contamination. Final lane, this is our experimental lane. Okay, you do these two experiments, the positive and the negative control, just to see whether your sample over here is working, okay?
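Before turning to the experimental lane, here is a compact restatement of the control-interpretation logic just discussed, as a minimal sketch. The function name and flags are purely illustrative.

```python
# A sketch of how positive/negative control outcomes gate interpretation.
def interpret(positive_ok: bool, negative_ok: bool, test_signal: bool) -> str:
    """Map control outcomes to an interpretation of the test lane."""
    if not positive_ok:
        # The apparatus/conditions themselves are suspect.
        return "Uninterpretable: positive control failed; troubleshoot conditions."
    if not negative_ok:
        # Signal with no test sample suggests contamination or a false positive.
        return "Uninterpretable: negative control shows signal; suspect contamination."
    return "Test sample positive." if test_signal else "Test sample negative."

print(interpret(positive_ok=True, negative_ok=True, test_signal=True))
```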
And here's the one that has the actual test sample. And notice that it gives you DNA, and it turns out this technique separates on the basis of size: it gives you DNA of a different size, okay? So we have both a positive control that works as expected and a negative control that works as expected. And then we have our experimental one. In a typical experiment in my lab, we'll have six or seven negative controls and maybe two positive controls, just so that we know what's going on. We cannot visualize what's going on, so we need all of these controls to follow what's actually happening in the test tubes, okay? Or sometimes even smaller than test tubes, okay? Sometimes we're even down at the single molecule level. So we really, really need all these controls, okay? I want you to be thinking about these controls when you design your proposals, okay? Good proposals will have both positive and negative controls. How you design your experiments and how you discuss them with me will in the end determine how creative they are and how robust they are and how likely they are to stand up to scrutiny, okay? If you want to propose something that's totally wild, like, I don't know, time travel or something like that, I will discourage you. But let's say you want to propose something that's not quite so wild, okay? But you come up with a whole bunch of controls that will really tell us something about whether or not your experiment is working. I'll go with it, okay? So be as creative as you possibly can be, okay? I'll look forward to reading those. All right. Let's talk about tools. So the first tool that's used quite extensively in the chemical biology laboratory involves dyes that are turned over. These colorimetric indicators, as they're termed, have been used for a very long time, probably at least 120 years, in chemical biology experiments, okay? They're used for all kinds of things. They're used to stain cells. They're used to follow enzyme reactions. And here is one example of these dyes. If you have some sort of enzyme in your reaction that you're trying to assay, and the enzyme somehow cleaves this ether bond, what will happen is this will then release a nitrophenolate molecule, shown here. This nitrophenolate is a nice yellow color, okay? So you can very clearly see: this one is clear, this one is yellow, okay? So everyone can see that difference? Okay, so if enzyme is present and enzyme is functional, you get a nice yellow color from this solution. Okay, now this is really powerful, okay? This gives you a way of turning stuff that you can't see into stuff that you can then visualize, okay? And furthermore, this is typically quantitative. In other words, you could pass light through here, say visible light, see how much light gets absorbed, and use this to quantify how much enzyme is present in your solution. Okay, doing this gives you a really effective way of addressing things like enzyme kinetics and, you know, other properties. You can look at, say, binding between receptors and ligands using this type of technique. So this is the bread and butter of chemical biology labs. Okay, B, you have another question? Okay, yeah, so B's question is, how do I know the concentration of the enzyme in this reaction? How do you make it quantitative?
Okay, so what you will do is you'll have a series of controls where you have a known amount of enzyme that's turning over this dye, and then you see how yellow it gets after five minutes with that known quantity of enzyme. Okay, and then you could use that to calibrate this experiment. Okay, so yeah, there's subtleties to everything I'm telling you. But this isn't too hard. Okay, thanks for asking. Other questions? Okay, so in this example, we're looking at light that's absorbed, and this absorbance results in the molecule radiating out the energy of the photons that it's absorbing as heat. Okay, in a different experiment, the light is absorbed, and instead of the energy of the photons being radiated out as heat, it's blasted out by the molecule as a photon with a lower energy. Okay, so a longer wavelength of light is being given off. Okay, so here's a series of different molecules that have that property, in that they absorb photons and then radiate back out photons of lower energy. These are used in fluorescence experiments extensively in chemical biology. These are used to visualize molecules inside cells, inside organisms, and in a whole host of different experiments. Okay, so I already told you this: fluorophores absorb photons of light and emit a photon at a longer wavelength (lower energy). Okay? You can select in your microscope just those photons at that longer wavelength by setting up a filter. Okay, so the way this works is, take your fluorophore, let's say this fluorescein over here. So here's your fluorophore. It's going to give you this greenish colored light. And in your microscope, you will have a filter that filters out all other light. Okay, so this prevents backscatter, except for light of this wavelength, that is, this nice green color. That will show you exactly where this fluorescein molecule is binding inside the cell. Okay? Furthermore, this technique is extraordinarily sensitive. It's one of our most sensitive techniques in chemical biology, supplanted only by the thing that Meriam is working on. Okay? So Meriam is doing something that's going to be even better. But for now, up until say two years ago, this was the champ. And you can get down to single molecules under the right conditions using fluorescence. You can actually see one fluorophore fluttering away as it's releasing photons. Okay? Pretty amazing. Okay? I will tell you that those right conditions are completely non-trivial. Okay? It takes a cooled CCD camera that's very, very large and very expensive. This is not like your cell phone hooked up to the top of the microscope. This is a really very special type of camera to visualize this sort of thing and pull in enough photons. But in the end, this is really powerful stuff. Because if you can visualize just one molecule inside the cell, then you can start getting at processes that really govern how cells work, where cells are oftentimes responding to a low number of molecules inside them. Okay? So this is a really powerful technique. It's used for all kinds of things. In this example, I'm showing you two cells that are dividing. And they're being pulled apart by these spindles over here. Sorry, the DNA, in cyan, is being pulled apart by the spindle apparatus into the two daughter cells. And the actin, which is the protein scaffold of the cell, kind of the skeleton of the cell, is highlighted in red over here. Okay?
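Returning for a moment to B's calibration question, here is a minimal sketch of the standard-curve idea: fit absorbance against known enzyme amounts, then read the unknown off the fitted line. All numbers are invented for illustration; a real assay would use replicates and check linearity (Beer-Lambert behavior) over the working range.

```python
# Calibrating a colorimetric enzyme assay with a standard curve.
import numpy as np

enzyme_ug = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # known enzyme amounts (micrograms)
absorbance = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # yellow nitrophenolate signal (made up)

slope, intercept = np.polyfit(enzyme_ug, absorbance, 1)  # simple linear fit
unknown_absorbance = 0.33                                # measured for the test sample
print((unknown_absorbance - intercept) / slope)          # estimated enzyme, ~1.6 micrograms
```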
Absolutely spectacular, stunning imagery, really, that you can find examples of where this technique is used. This is completely ubiquitous. This technique is used for visualizing stuff inside the cell. It's used for visualizing stuff outside the cell, in little tiny reaction flasks, for doing screens of drugs, for doing phenotypic assays of cells as well. Okay, and question over here. Does this technique basically use FRET? Yeah, so the single-molecule technique that I described would use FRET. So, thanks for asking. Other questions? Yes, over here. And what is your name? Chelsea. So, basically these small molecules are made so that they can bind to a specific part of the cell? Ooh, Chelsea's question is a really good one. Okay, so Chelsea's asking, why should this dye bind to the DNA over here and nowhere else inside the cell? Later we'll be talking about the dyes that bind to DNA and what makes them special. But you're absolutely right. They need some way of getting guided into the cell. So, for example, the red color of the actin, I believe, comes from an antibody that binds to actin. Okay, so that's that big molecule that I showed earlier. That antibody is then attached to this rhodamine. Okay, so rhodamine is attached to the antibody. The antibody that's being used is specific for actin. It binds to actin. It's a professional binding protein that was raised just to bind to actin. And now it's going to highlight all of the actin in the cell in this rhodamine red color over here. Okay, really cool stuff. So thanks for asking. But you have to have some other technique that will target the fluorophore specifically to whatever it is that you're lighting up inside the cell. Okay, great question, Chelsea. Other questions? Okay, so again, a totally ubiquitous technique, used very extensively. I imagine every single one of you will have some experiment in mind that will use either fluorescence assays or colorimetric assays of your molecules. Okay, now here's the deal. I've shown you two different assays. We can expand these up to look at literally thousands of molecules a day and thousands of conditions a day using, for example, microtiter plates. Okay, so these are plates that are about this big. So they're not that big, and they're standardized, and they have a standard number of wells on them. The ones my lab uses are 96 or sometimes 384 wells per plate. That's this big. But it's not unusual to have 1,536 wells in a little space that's about this big, okay, where each well is, you know, say 10 microliters or something like that. Okay, but what that means then is on that plate you can assay 1,536 different conditions. Okay, so that's roughly 1,500 different conditions. Okay, maybe 50 of those are different controls, negative controls, positive controls. But you're still looking at a huge number of different molecules, of different other variables that you're testing in that one little tiny area. And it's not infrequent for me to visit places where they have a whole room this size filled with robots that are pipetting, that's this technique over here, pipetting reagents in an automated fashion into these tiny little plates. And then the robot has a little arm that brings the plate into a reader, and the absorbance is read out automatically. And all this data is then ported to your desk and appears on your laptop. Okay, very cool, isn't it? Okay, so yeah, it's a great time to be alive.
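To get a rough sense of scale for this plate-based screening, here is a toy throughput calculation. The plate count per day is an invented assumption for illustration, not a quoted figure.

```python
# Toy arithmetic for daily screening throughput on one automated platform.
wells_per_plate = 1536
controls_per_plate = 50        # positive + negative controls, as in lecture
plates_per_day = 20            # hypothetical robot throughput (assumption)

test_wells_per_day = (wells_per_plate - controls_per_plate) * plates_per_day
print(test_wells_per_day)      # 29,720 test conditions per day under these assumptions
```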
Okay, so we talked earlier about how this absorbance can be used for quantitative analysis. Oftentimes we rely on antibodies to bind with specificity to a particular molecule. This is the question that Chelsea was asking. It's not unusual for us to actually add an antibody that's specific for some target inside the cell. Okay, and we're doing this so that we can actually look at just that individual protein. And I showed you earlier the structure of antibodies. That structure allows them to be very, very specific. If the antibody is attached to an enzyme, then you can look at turnover of a dye, and that can visualize the presence of a molecule as turnover of a dye. Okay, everyone still with me? Makes sense? Okay, and the scope of this is enormous. Pharmaceutical companies will screen through half a million compounds in two weeks using techniques like this one. Okay, and there might be two humans involved in those experiments, both of whom are keeping the reagents and the robot happy. Okay, it turns out actually programming the robot is not so trivial. So, you know, it's very different than telling undergraduates, okay, I want you to pipette out all these things. Okay, this is much more industrial scale. Okay, and it's used very routinely in chemical biology labs. Okay, sound good? All right, let's move on. Another very powerful technique that's used quite routinely is basically a Darwinian evolution technique, where you can evolve organisms that can accomplish some chemical goal. For example, over here, this is an experiment to find mutant bacteria that can take advantage of iron and metabolize this iron. So, in this plate over here, the left side is the negative control. These are bacteria that were not mutated, so you do not expect them to be able to handle the iron. And on the right side, these little circles are examples of colonies of bacteria that can take advantage of the iron and actually accomplish their metabolism. Over here in panel B, this is a different experiment, where you're looking for bacteria colonies that can produce lycopene. Lycopene is the red dye that's found in tomatoes. It's the reason why tomatoes are red. And it also is thought to have some anti-cancer properties, although the evidence for that is not so well supported. But in any case, you can imagine evolving the bacteria, putting in the genes that encode lycopene production, and then evolving the bacteria to produce this red-colored dye. And then at the end of the experiment, you'd go in and simply pick out the reddest of the colonies over here. Now, if you look closely at this, there's some really, really, really interesting stuff going on. Okay? Do you notice how some of these are kind of mottled in appearance? This one has some little red dots, and then it looks mainly clear. What's going on there? That's absolutely fascinating. Okay? I'd like to know more about that. So the essence of being good scientists is not simply running experiments. The essence of being good scientists is designing good experiments and then observing the results like a hawk. Okay? You have to look at these things intensely, intensely, intensely, and ask questions. Why is there a white halo around this one and then a red inside? What is different between the bacteria here and the bacteria out here? Maybe it's a trivial reason.
Maybe these guys have had more time to produce their lycopene, and these guys just haven't grown as long on the outside. But you still would want to know that. And so being a scientist is all about designing good experiments, and then, next, observing, observing, observing. In making those observations, that's where we make progress in science and where we make progress in chemical biology. Okay? Sound good? All right. Oh, I didn't tell you about the Darwinian evolution. You can imagine getting a bunch of mutants, picking out the winners over here, mutating them again, picking out the winners, mutating again, picking out the winners. That's the same process of evolution that we talked about on Tuesday, where you diversify the pool, select for fitness, and keep doing the same thing again and again and again, until eventually you have some super growers, ones that can grow really, really fast under those conditions. Okay? And it would be really interesting to understand at a molecular level what's going on there and what's allowing them to do that. Okay. Viruses are very powerful tools for gene delivery. They're very efficient at infecting cells. I'll be showing you an example of viruses in action in just a moment. My laboratory grows large quantities of viruses as a tool for chemical biology. Their major goal in life is to make copies of themselves. That's what they do. Okay? They have a very short lifetime, and during that time they are totally fixated on making new copies of themselves. Because they have such short lifetimes and they're so ruthless at amplifying themselves, this provides a very powerful tool for selections. Okay? Let me show you an example of this. The example is using a technique called phage display, which again is applied by my laboratory and many others. What we do is we start with a filamentous virus. Okay? So each one of these little hairy things over here, each one of these thread-like things, is a single virus. And this particular virus infects E. coli. Okay? So like all viruses, the inside of the virus encapsulates genetic material. In this case, this virus encapsulates DNA. There are other viruses that are RNA-based. This one happens to be DNA-based. Okay? Now here's the great part. As chemical biologists, we can go in and manipulate the DNA that's found inside the virus. When we do this, we can coax the viruses into producing large numbers of different viruses, each one with a different protein displayed on its outer surface. Okay? Each one with a different protein on its outside. That's what's called displayed. Okay? And then you can do selections. So for example, you have say a billion different viruses, each one with a different protein displayed out here. You can then throw these viruses at a chemically modified surface down here and then simply pick out the winners, the ones that can grab on to this chemical found on the surface over here. Everything else that can't grab on is washed away. You wash this away using some sort of buffer. Okay? So you just flow water over this for five minutes. I guarantee you, everything that's a weak binder, everything that can't really get a good grip on the chemically modified surface, gets thrown in the trash. Okay? And then you start amplifying up those winners, and then you do the process again. And then you do the process again and again, like four or five times. By doing that, you start to get very tight binders to the chemical found on the surface that you're targeting.
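The panning cycle just described can be caricatured in a few lines of code. This is a toy simulation under invented assumptions (library scaled down to a million phage, binders retained with 80% probability per round, non-binders with 0.1%), not a model of a real experiment; real protocols amplify the survivors in E. coli between rounds.

```python
# Toy simulation of phage-display panning: repeated wash steps enrich binders.
import random

random.seed(0)
pool = ["binder"] * 100 + ["non_binder"] * (10**6 - 100)  # scaled-down starting library

for round_number in range(1, 5):
    # Each phage survives a round with a probability set by how well it binds.
    pool = [phage for phage in pool
            if random.random() < (0.8 if phage == "binder" else 0.001)]
    # (In a real experiment the survivors are amplified in E. coli here.)
    print(round_number, pool.count("binder"), "binders of", len(pool), "total")
```

After a couple of rounds, essentially everything left in the pool is a binder, which is the point of the iteration.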
Okay? So this is a way of starting with literally 10 billion different molecules and coming down and identifying just a few that do something special, such as bind to this chemical over here. Okay? Question over here? Seeing as a virus is so small, how can you pick out an individual virus? Yeah, okay, that's a great question. So how do you even manipulate these viruses? What we do is we infect them back into their E. coli hosts, and then we can make colonies of those infected E. coli, where each colony has one and only one type of virus inside of it. Okay? And then you can actually see the virus there. Okay? So after selecting for the virus that binds your target, you can infect that virus into the E. coli? Yeah. Let me show you on the next slide. Okay? Great question. Okay. So the question is about the particulars of how this technique works. Again, here's the viruses over here. Here's the size of our library, around 100 billion or so. That's the maximum size that we can make. Notice that in this electron micrograph over here, there is a little cluster of grapes at one end of the virus. That's its head. That's what it uses to grab on to the E. coli that it's going to infect. Okay? So that's this part up here. Okay? That's the head of the virus, that cluster of grapes. And again, the DNA is stuffed into the long pipe of the virus over here. And the virus is very flexible. Okay? So this virus is like a hose in terms of its flexibility. Okay? Now, here's the experiment that I was getting asked about earlier. What you do is you make your library of different viruses, each one with a different protein displayed out here. And then you throw those viruses at some target, Pac-Man. Okay? This Pac-Man shaped target that happens to be stuck on some sort of surface. Okay? You then select all of the things that bind to Pac-Man and wash away everything that doesn't bind. Okay? So in this step, you go from 100 billion down to, let's say, a couple hundred. Okay? And then you pick out these viruses, you amplify them up in their host, E. coli, and then you do this again. Okay? So again, we target Pac-Man. Wash away the non-binders, amplify up the binders, wash away the non-binders, amplify up the binders. And you just keep doing this a bunch of times. Okay? At the end of it, you'll end up with, let's say, 50 to 100 that bind really well to the targeted Pac-Man shaped molecule. Okay? So now you want to go in and look at those individuals and see which one binds the best. I think that's your question, right? Okay? So what you do is you infect the winners into E. coli. This is a bacterium. And then you can plate out the bacteria such that you end up with colonies. Okay? That was shown over here. Each one of these dots is called a colony. These are genetically identical bacteria. In the case of virus-infected bacteria, each one of these colonies will have a different virus in it, a different bacteriophage in it. Okay? And then you can assay each one of those individually. Okay? It turns out that this principle of vast libraries of proteins that are displayed on phage is also applicable to DNA and RNA. And this is another tool that's used routinely in chemical biology laboratories. So my colleague, Professor Andrej Luptak, for example, routinely makes huge libraries of RNA and then selects for binders from this big library. So here, for example, is a derivative of rhodamine, a molecule that I showed you earlier.
And here's an RNA sequence that likes to bind to this rhodamine-like molecule that I showed earlier. So you can select for binders to all kinds of different things from these vast pools of both DNA and RNA. Okay? Using exactly the same principle that I showed earlier. You attach this molecule to some surface. You throw at that surface the big pool of, say, RNA, wash away all the nonbinders, grab onto the binders, amplify them up, repeat the process. Okay? So it's simple molecular evolution. Okay? Exactly like the evolution that we talked about on Tuesday. Now, the reason why it's important to apply this evolution is that you cannot know in advance exactly what sequence is best going to bind to some complicated molecule like this. Okay? I know it would be really cool if I could sit down with my laptop and, you know, crunch some numbers and at the end of that get the perfect RNA sequence. But we chemical biologists can't do that. Okay? We just don't know what the design rules are for designing something that has a pocket shape like this. And furthermore, what are the functionalities that we're going to need that will be complementary to the partial positive charge over here, the lone pairs on oxygen, the aromatic over here, et cetera. It's better just to go out and do the experiment, see what you get, and then analyze what you get at the end of it. Okay? Makes sense? Okay. So that was an example in your toolkit of using libraries both on phage and libraries that are DNA or RNA. The next thing in your toolkit is small molecules. So small molecules are used extensively in chemical biology. Some of these molecules are antibiotics. Some of them are natural products that are being produced by microorganisms as they fight off their invaders. But others are discovered in chemical biology laboratories with a particular function. Okay? And so these molecules are used quite extensively both in chemical biology laboratories but also in cell biology and biochemistry labs. So for example, yesterday I showed you the pathway of the central dogma, which is the pathway for biosynthetic information inside the cell. Small molecules such as the ones shown over here are known to inhibit pretty much every step of this pathway. And so on the shelf you can have molecules that would, say, disrupt the process of translation, like cycloheximide shown here, or other molecules that disrupt transcription, such as alpha-amanitin shown here. And these are molecules that you can buy from your chemical supplier. Okay? So these small molecules give you tools to shut down specific events inside the cell. Okay? Now what's so powerful about this is you can control the dose, the location, the time of delivery, et cetera, with precise control over those types of things. Okay? The dose is simple, right? You add the exact concentration of the small molecule you want. And where this is important is that this also controls the percent of inhibition that you're doing. Okay? So let's say you want to shut down a little bit of protein translation but not all protein translation. Maybe you don't use a huge quantity of cycloheximide over here. More likely, though, you just want to shut down all protein translation, so you add a large concentration of cycloheximide. In addition, you can control the location.
So you can deliver the molecule to some location. Let's say you're looking at an organ under the microscope and you want to know, you know, what happens if I shut down protein synthesis on this part of the stomach but not this other part over here; you can dose that part of the stomach and then leave the other part undosed. In addition, you can control the time of delivery, right? Say you're looking at circadian rhythms inside, I don't know, your neural cells, right? Circadian rhythms are the timing clocks that are used by organisms to coordinate their day. You might be really interested in knowing what happens if I shut down transcription right before the organism goes to sleep. So being able to add the small molecule at a precise time, in a precise location, with a precise concentration is really powerful, and it's one of the reasons why small molecules are so important in chemical biology and cell biology labs. Okay, any questions about what we've seen so far? Okay, I've shown you a whole series of different experiments that you can do and that you can plan to do. I want to show you next the players that you're going to be using for designing your proposal ideas. Okay, you're going to be using model organisms because, as I told you earlier, I don't want you to plan experiments on humans. Okay, that would not be the point of this course. Okay, instead what I'd like you to use is model organisms or samples that are obtained from consenting human adults. Okay? Okay, so in general, when you're choosing a model organism, you want to choose one that grows easily, that's easy to study, that grows quickly, and that has some relevance to human biology. Okay, not every model organism is going to be so great. If you want to study, say, you know, the hearts of Burmese pythons, and Burmese pythons take years to grow or something like that, it might be a very long PhD for you or your students, and no one likes that. Okay, so you want to choose organisms that grow quickly, that are inexpensive to grow, that don't require really exotic conditions to grow. You know, if you have to feed your Burmese pythons rabbits every two weeks or something like that, it's going to be expensive and it's also going to be a lot of hassle. And so you need to have some really good reason to have chosen Burmese pythons as the model system. In general, these are the model systems that we use in chemical biology laboratories, with the exception of humans down here. I'm just listing them for a point of comparison. Okay, so I will step through each of these and tell you about their properties. Okay? So for example, I've shown you earlier the use of this bacteriophage. This is a virus that only infects E. coli bacteria, hence the name bacteriophage; phage means "to eat," so it's a virus that eats bacteria. And this one only infects E. coli. This makes it very convenient for us to use in the laboratory because we don't have to worry about it, quote unquote, escaping. We don't have to worry about infecting my coworkers, the graduate students and postdocs in the lab. Furthermore, it has a very simple genome. It just has 11 genes in its genome. That makes it easy to manipulate. Okay, this reference here is to the picture that I'm showing you and that I showed earlier in the class. Okay, it's the electron micrograph. In addition, it grows in E. coli. Let me show you what E. coli look like. So here are E. coli next to a red blood cell. Let's see, is this right?
No, sorry, this is next to a macrophage. So these are the cells in your immune system that are charged with eating E. coli or other foreign invaders. Okay, so E. coli is on the order of about one micrometer in scale, and each human cell is on the order of 20 to 30 microns in scale. Okay, so that gives you kind of an idea. And I think this picture dramatically illustrates the relative scales. This makes sense, right? E. coli are prokaryotes. I showed you structures of prokaryotes last time. Human cells, of course, are eukaryotic cells. They're a lot more complicated. They have a lot more organelles inside them, et cetera. Okay, so, a classic experiment in biological history. This is Fred Griffith at the top with his dog Bobby. Okay, I always like to know the names of scientists' dogs. Fred Griffith learned to recognize R pneumococci and differentiate them from S pneumococci. So R equals rough, S equals smooth. And he found that dead S pneumococci could transform live R. And Avery, this guy down here working at Rockefeller, showed that if you isolate the DNA from the dead S bacteria, it could transform the R bacteria into S. Okay? So the important idea there is that it showed us that DNA was the hereditary unit of the cell, that DNA was encoding the machines inside the cell that were making the outer surface either smooth or rough. Okay? Sad history here: Fred Griffith died in the London Blitz, when the Germans were bombing London. Okay, so E. coli: extensively, extensively used. I've shown you a couple of examples, including phage display, today. Yeast are used as a model system as a very simple eukaryote, you know, the eukaryotic equivalent of E. coli, but very simple to grow, very easy to genetically manipulate, et cetera. As things get more complex, we get towards organisms like fruit flies over here. Fruit flies are used extensively in laboratories because they grow quickly. And you can do selections for things like morphology, shapes of wings and things like that, but then even more complex traits such as behavior. And I will show you one example of this. This is one of my all-time favorite examples. This is the great Ulrike Heberlein, a professor at UCSF. And in this experiment, the Heberlein lab has built an apparatus that they call an inebriometer. Okay, so this looks at drunk fruit flies. Okay, so here's the way this works. This bottle over here contains ethanol. And then she pulls a little bit of a vacuum on this so that the vapors come off, or she blows air over the top of this, so that vapors of ethanol come off over here. And then she applies a bunch of different fruit fly mutants to the very top of the column. Now when fruit flies land on these cones over here, and the cones are made out of like a little wire, the fruit flies grab onto these things. Okay, that's what fruit flies like to do. They like to perch on things. But now they're being washed over with this ethanol vapor. Okay, so the alcohol is coming over them and they're inhaling it. They can't get away. And so as they start to wobble back and forth, they fall down to the next cone. And then they grab on again. But then they start wobbling around as they get drunk from the ethanol and they drop down to the next one. Until eventually down here, they totally pass out. Now the wild type fruit fly over here takes 20 minutes to come through this column.
Whereas there are mutants that the Heberlein laboratory found that only took 10 to 15 minutes to get through the column. In other words, those were fruit flies that were getting drunk and passing out faster than the other fruit flies. So the chemical biology part of this experiment would be to understand what genes are involved, and then, at the level of atoms and bonds, why those genes are making the fruit flies drunk faster. Okay, now I do have one request. Please do not plan your chemical biology proposal using an inebriometer. I have seen every variant of this with marijuana smoke, with all kinds of, you know, things that cause all kinds of interesting effects. So use any other experiment. But what I like about this is I love the experimental design. It's very straightforward. Any one of you in this classroom could have invented that. And that's what I'm going to be looking for when I look at your proposals later in the quarter. Okay, I'll see you a week from today. Back in this lecture hall, we'll be talking about more model systems. And then we'll be talking about arrow pushing.
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:03:11 Our Story Thus Far: Principles to Organize Biology 0:18:39 Modular Architecture Allows Combinatorial Synthesis 0:30:40 Common Tools in Chemical Biology 0:47:04 Fluorophores Allow Visualization of Molecules Inside the Cell 0:52:24 Assays to Detect Molecules in Solution and Cells 0:58:41 Viruses for Gene Delivery 1:02:18 Phage-Displayed Protein Libraries 1:04:55 Vast Libraries of DNA and RNA 1:07:09 Small Molecules Provide Control over Cell Processes 1:10:26 Model Organisms for Biology and Chemistry 1:13:41 Bacteria Used to Define DNA as Responsible for Transferring Heredity 1:15:32 Fruit Fly (Drosophila)
10.5446/18860 (DOI)
I'm going to run the class as follows. I'll have the most important announcements at the very beginning of the class. So I'll be talking about stuff like what's covered on the midterm, what's expected from your proposal assignment, et cetera, at the very beginning. So you definitely want to show up on time, show up early, get a seat, be prepared, because the most important stuff is going to be in those first five minutes, okay? Oh, and by the way, feel free to interrupt if you have any questions, okay? So don't hesitate to interrupt if anything comes up. Okay, so some announcements today. And again, the announcements will come out at the very beginning of each class. Our reading assignments this week: I'd like you to obtain the textbook. It's available in the bookstore. There was a big stack of them when I visited last week. They ran out? Oh, well, that's good for me. Okay, if they ran out, amazon.com has them on sale and you can get them delivered very quickly, okay? And I know for a while, Amazon was selling them at some ridiculous discount. So I know because, as one of the co-authors, I'm very interested in how they're selling. Along those lines, as one of the co-authors, I'm planning to donate the profits from the book from anyone in this classroom back to UCI to support research in chemistry, okay? So I'm requiring a book that I wrote. I'm obviously aware that I'm going to profit from that. The profits will go back to UC Irvine. Okay, so if you have a copy of the course reader from previous years, please throw it away, okay? It's not going to be any good. I mean, it's good, but I've changed the material quite a bit and the textbook is significantly improved. The problems are slightly different. I think the figures are much better, et cetera. And of course it was edited. So the course reader from previous years is not going to carry you. You need to buy a copy of the textbook. So Natalie, how does the sound sound? It sounds great. And I'm sorry, just one quick announcement. I know this is a tiny room, so it's going to be difficult. Since I have like 10 minutes to set up, it would be super helpful if you could come in through the door on that side and give the equipment a little space. And the day after your lecture, all of these lectures will be available on YouTube. So if you can bear with all of my equipment, then you can watch these and enjoy them as many times as you want. Thank you, Natalie. So yes, they will be posted online for you. So you can enjoy them and study from them, et cetera. The goal here is that UC Irvine is one of the very first universities to have both a lecture class and a laboratory class in chemical biology. We started these back in 2000 when I was an assistant professor. And since that time, we've obviously built up quite a bit in terms of our sophistication in presenting the subject. And so my goal is to really bring that level to other universities around the world and around the country. So anyway, that's why we're doing this. But it also has some benefits to you as well. Okay. So reading assignment for the first week: read chapter one. I'm going to be covering all the material in chapter one. So there's nothing for you to skim through or anything like that. On future chapters, there will be stuff that I won't be covering. And I'll tell you when that happens. Okay. And you'll notice when it happens. Okay.
If you want to get ahead, start reading chapter two. Chapter one is pretty basic. Chapter two then starts getting more advanced. Homework: do the problems in chapter one, all of the odd problems and also all of the asterisked problems. And let me add to this: the answers to all the problems with an asterisk are available online. So I'd like you to do those as well. Okay. And then in addition, we'll be posting worksheet number one on the website. It's not there yet, but it'll be posted very soon. Oh, it is there? Okay, so it's already posted. That will form the basis for the discussion sections. Please work the worksheet as well. Okay. So before I get started, before I go through very much more, I want to tell you what you should be paying attention to. The first thing are these announcements that I'm giving you. Then, what's discussed in lecture. The discussions that I give you in lecture are your guide to what I think is important. Okay. So right before the midterm, you're going to want to know, what do I need to know on the midterm to get an A in this class? And my answer is always the same, which is: what did I talk about in lecture? What I talk about in lecture is what I think is important. I have a limited amount of time for these lectures. I'll be doing two lectures per chapter of an hour and 20 minutes each. And so if I talk about it in lecture, I'm telling you, I think this is important. This is something you need to know for the midterm. Okay. So what's discussed in lecture is super important. This includes both slides and anything else that's posted to the website, the discussion worksheets, and then the discussion in discussion section as well. If you're sitting on the left side of the classroom, can I ask you to sort of scoot in if you have an empty chair on your right? So just to create some more extra chairs, because we have people that are arriving late. So just sort of scoot over please. Thank you. Okay. The next most important thing is the assigned reading. But filter the assigned reading through the lens of what I talk about in class. If I talk about it in class, that's telling you it's important; if I don't talk about it, less important. And then finally, the problems in the textbook are least important. Good news: there's a few things that you don't have to worry about. The first of these are references on the slides. I find it almost impossible to do stuff without having some referral back to the literature. That's sort of the nature of scholarship. And it's totally impossible to get me to stop doing this. When Dave and I wrote the textbook, for example, we had a list of references that's like 10 times longer than the one that's posted to the website. And we found it totally impossible, though the publisher told us to stop doing it, to leave out those references. And so references are basically the currency that underpins what I'm telling you. But on the other hand, this is an introductory class, so don't get worried about those. Okay. If you take a graduate class and they have references on slides, you'll want to look up those references. But at an undergraduate level, don't get worked up about it. Okay. So don't stress about those. In addition, don't stress about stuff that's covered in the textbook that we don't discuss in class. Okay. So if I, you know, I've said this before, if I don't discuss it in class and it's in the textbook, don't worry about it. Okay.
So the text is written at sort of an advanced undergraduate, early graduate level. And there's material in there that's frankly graduate level. But I don't want you to get stressed out about it. Okay. So if I don't talk about it in class, that's my signal that I don't think it's so important for you to learn. Okay. Any questions about what I'm telling you? Hey. Are there any textbooks reserved in the library? Oh, that's a good question. What is your name? David. Miriam, could you look into that for David? No, they're not there yet, but they are ordering them. So as soon as they get them, there should be some in the next two weeks. Okay. So eventually they'll appear there, but not yet. Thank you. Okay. Thanks for asking. Another question. What is your name? Will you be collecting the problem sets, or will we just be working them? No. So we will not be collecting the problem sets. We'll have plenty of other chances to learn about your intelligence and creativity. So, another question? Will the slides be posted ahead of time, before class? That's a good question. I'll try, but I'm usually frantically getting ready the day of. So I'll do my best. Certainly the Thursday lecture will be, but maybe not the Tuesday. I'll do my very best though. Other questions? Okay. More background. Course instructors: Professor Weiss. I've been teaching this class for about 12 years. And I absolutely love chemical biology. It's what makes me run to work. It is my sole passion in life. That's a little bit of an exaggeration, but close. Okay. So what else would you like to know about me? Here's your chance. For the next five minutes, you can ask me anything you want. Personal, not so personal. Go ahead in the back first. What's your research about? So my laboratory is at the interface of chemistry and biology. And we're trying to develop new ways of looking at individual molecules and dissecting how membrane proteins work. Thanks for asking. And a question over here? Is that your car on the screen? It was. I'm kind of a competitive guy. I like driving fast. I like racing. So yeah. Question over here? What's the difference between biochemistry and chemical biology? So this is a great question. The question was, what is the difference between biochemistry and chemical biology? Chemical biology emphasizes what's happening at the level of atoms and bonds. And biochemistry emphasizes what's happening at a larger scale. So in biochemistry, my colleagues are content to look at proteins as sort of large molecules without getting too worked up about a hydrogen bond here and a hydrogen bond there. Sometimes they get worked up about those things. But most of the time the diagrams, signal transduction diagrams and things like that, are just large blobs. And in this class we'll be zooming in and looking at the actual atoms and bonds. Okay. Good question. Okay. Anything else? Personal? This is your last chance. Ask me anything personal. Ask me about my pets, my hobbies. Go ahead. So do you go to the races a lot? No. I wish I did. I only get to go out once a year. It's kind of a limitation. No. Thanks for asking. Okay. Well, I should also let you know I have two cats. I'm married. And that's it for the personal information. Okay? Okay. Last question. Go ahead. I have zero kids. That's why I have a two-seater car. Okay, you guys. That's it on the personal stuff. Enough about me.
I'm very pleased that this quarter we have really the very best TAs in the chemistry department. I've gone through and I've handpicked the TAs. Miriam Iftekar is a great example. Miriam and I taught this class last year. And she knows everything there is to know about this topic. Her research is in chemical biology. And she's absolutely superb. If she tells you something about the class, you can take that as good as coming from me. Okay? In addition, our second TA, Krithika Mohan, isn't here today. She's been tied up in India. But she'll be back in the next week or so. And she's also a great source of information. She's also a graduate student in my laboratory. Okay? So we're really lucky to have California's finest TAing for us, Krithika and Miriam. Okay. So in terms of office hours, I will be having two office hours a week. My Thursday office hour is set. My Wednesday office hour, however, will float. Okay? So I will always have office hours Thursday, 11 to noon. The other office hour, the second office hour, will float, meaning that my schedule is constantly changing and so I'll have to change this around. Okay? So every week I will announce when that office hour will take place. If, for example, my office hours don't fit your schedule, tell me at the beginning of the week when you'd like my office hour to be and I'll do my best to accommodate as many people as possible each week. Okay? So first office hour fixed, second office hour floating. I will always have the office hours set up in a way that's at the interfaces between classes. So you don't have to attend the whole office hour. If you can attend just the first 15 minutes or so, or 10 minutes, and then fly off to your class, that's perfectly okay. Show up for five minutes, get your question answered, and then disappear. I don't care. I don't mind. But I'll always set them up so they're kind of at the junction between classes. That way it's less likely that you'll be able to tell me that you have a scheduling conflict with every one of my office hours. I've heard that before, and I usually ask those people to show me their class schedules. And I've never seen it actually turn out that way, especially since I have the second office hour floating. So there's going to be plenty of time for you to meet me this quarter. And in fact, I really want to get to know you. Okay? I will get to know the names of 95% of you in this room. I will know something about what your career aspirations are. I will know something about your creativity in terms of your ability to come up with novel ideas, your writing ability, and a lot of other characteristics as well. So at the end of this, I will be able to write a very good letter of recommendation for you. Hmm. Okay. This is not apropos of the last topic, but I would like you to shut off your cell phones, please. Okay? And that also includes text messaging as well. Thank you. Okay. So anyway, come out to my office hours, especially in the first couple of weeks. Introduce yourself. Tell me why it is you're taking this class. What it is you hope to learn. What it is that you're hoping to do once you graduate from UC Irvine. And if there's anything I can do to help you on that course, I will do it. Okay? That's one of my jobs. And furthermore, even after you graduate from this class, you can still keep in touch with me. You can still get letters of recommendation from me. And you can still have my support in your career aspirations. Okay? That's my promise and commitment to you. Okay.
And the TAs will also have office hours each week. Their office hours will always be on different days and times than my office hour. And their office hours are much more fixed than my office hour. Okay? So any questions about anything I've said? Any of the announcements so far? Okay. All right. Textbook, I've already mentioned this. Again, it's available on Amazon. I understand it's sold out at the bookstore, but you can get it from Amazon. Supplemental text: I'd like you to have available an organic chemistry supplemental text. When I talk about peptides, for example, and I talk about amide bonds, I'm going to assume that you've read the chapter on amide bonds and peptides in this supplemental text, even if it wasn't covered in 51C. Okay? I'll just ask you to go back and read that chapter. Okay? And so you need some sort of supplemental text available in organic chemistry, basically as a reference. Okay? And it's nice because this will provide kind of a lower-key treatment of a more complex topic. So for example, if you want to learn the very fundamentals of DNA or carbohydrate chemistry, the best place to start is whatever textbook you used for 51C. Now I realize many of you sold your textbook right after the class was over. That was a huge mistake. But it's not too late to change things. Number one, I can give you or loan you a supplemental text if necessary; come to my office hours. The first five people that show up will get one of those. Second, the library, the science library, has about three shelves that are like this wide that are filled with organic chemistry texts. The exact text does not matter. Okay? Basically, if you look at sophomore organic chemistry textbooks, they're all more or less the same. Okay? What really matters, though, is that you have one available to you that you can refer to as a reference. You need that for this class. Okay? Because I'm going to assume that you know the material in there. Now along those lines, I've gotten a couple of emails from some of you who are concerned. You had trouble in 51C. You had trouble in sophomore organic chemistry. And now you're taking this sort of advanced organic chemistry class and you're worried. Okay? Here's what I want you to do. First, don't panic. Okay? I will do my best to get you up to speed on arrow pushing and some other fundamental principles in the next two weeks. Okay? So don't panic yet. At the end of those two weeks, if what I'm doing on the board and your ability to keep up in discussion section and on the homework are just, you know, apples and oranges, fields apart, okay, you're not even on the same racetrack, then you can start panicking. But for now, no panicking. Okay? If you were really, really weak in sophomore organic chemistry, I'd like you to open the chapters on carbonyl chemistry. Whatever book it is, reread the chapters on carbonyl chemistry and get up to speed on those. If you understand how carbonyls react, how the alpha carbon is acidic, and a few other things, you'll be fine in this class. Okay? It turns out that like 60 or 70% of the organic chemistry that underlies biology involves carbonyls. Okay? So start there first. After you finish with the carbonyls, come see me again and I'll give you the next topic, which will probably be amines or something like that. Okay? Sound good? Okay. So hopefully I've allayed some of your fears. Don't panic yet, but get ready to panic in the next week or so. And also get ready to take your game up a notch. Okay?
So that, you know, even if you had a bad time in 51C, you can do pretty well in this class if you're ready to work pretty hard, you know, do lots of problems, come up with creative ideas, et cetera. Okay? Discussion sections. These are mandatory. This is especially important if you're weak in organic chemistry. Discussion sections are going to be run in a problem-solving format, and this is your chance to show that you can do arrow pushing with the best of them. So a lot of the problems in this class involve mechanisms. And so in discussion sections, you'll have a chance to demonstrate your ability to do mechanisms. You'll get up to speed on doing these correctly, et cetera. Okay? So again, the first worksheet will be posted shortly. The first discussion section will start this Wednesday. Miriam will be teaching that one. And then after that it will continue. Okay? Now if you're scheduled for a Monday discussion section, don't panic. The material that will be covered on Wednesday will then be covered on the next Monday. Okay? So we'll have them slightly staggered throughout the class. Okay? And it turns out that actually works out fine, because the midterms are on a Thursday and a Tuesday. Okay? So there will be two midterms in this class. And there are no make-up exams available. They will take the full hour and 20 minutes. There's going to be an emphasis on arrow pushing and concept problems. There will be things like short answer. There will be no multiple choice. There's going to be short essay type problems. There will be problems where you have to design experiments, things like that. Okay? But lots and lots of arrow pushing. So get ready for arrow pushing. In addition, the other way that I'm going to assign your grade is I'll be looking at two written reports that you're going to submit in the class. The first of these is a journal article report due, unfortunately, on Valentine's Day. Happy Valentine's Day from your chemical biology friends. And in this report, you're basically going to be doing the equivalent of a book report, but using an article from the primary literature as the basis for the report. I've already posted to the website an example of this. In addition, instead of a final exam, this class will have a mandatory proposal that's due on the last day of class, March 14th. Okay? So that's a mandatory proposal. You cannot pass this class without turning in the proposal. But there's no final exam. Yay. The proposal will consist of an original idea in chemical biology. Now I know this is daunting. I've taught this class before. I know this is really intimidating. Don't panic. I will have a series of exercises for you this quarter that will get you up to the point where you're ready to come up with creative novel ideas at the cutting edge of chemical biology. So you will be ready for this. You'll be ready to contribute. And the good news is, in chemical biology there's so much that we don't know that there's lots of room for smart people like yourself to come up with really great new ideas. And I see this every year. Every year I could take the very top proposals from this class, present them to the National Institutes of Health, and they would get funded. Okay? The best ideas I can put up against faculty ideas anywhere. Okay? So I've seen that before. And the other thing is I'm looking for a small idea. I'm not looking for, you know, the next Manhattan Project or something like that.
I'm just looking for, just give me a base hit. You know, something that will work, that will teach us something new about chemical biology, and you're good. Okay? Quizzes. I will have a series of quizzes in this class that will number between one and five. Okay? And more likely one to two. There will definitely be a quiz sometime in that last week, and the reason is our second midterm is in February and the class keeps going till March. Okay? So there will be an easy quiz. The quizzes in general are designed to be easy. They basically, you know, recapitulate something that you just saw on the board. Okay? So we'll run these either at the beginning of the class or the end of the class, and it'll be something along the lines of: you just saw this mechanism, show me again how it works. Okay? Something like that. It just basically tells me whether or not you're paying attention and who's showing up for class. And by the way, I'm delighted to see all of you happy people out this morning. Welcome. But I know as the class wears on that you guys get very busy, and of course the lectures will be posted online. There has to be some incentive here to get you rolled out of bed at 9:30 in the morning. Okay? So we will have some quizzes. It won't be too many and they won't be hard. Okay? That I promise you. In terms of percent of your grade, those quizzes only count for 5%, the same level as participation. Participation counts in both lecture and discussion, and for that matter even office hours. Okay? So me and Miriam and Krithika getting to know you, that's how we determine the participation scores. Oh, and by the way, I will post all of these slides online. Okay? So they'll all be posted to the website. So you'll have copies of them. They're not posted now, but they'll be posted shortly. Okay? Each midterm will count for 22% of your total grade. The journal article report will count for 16%. And then the proposal, which is in place of the final exam, counts for 30% of your grade. Okay? So it's a pretty even distribution. There's lots of opportunities for you to get feedback, et cetera. Any questions so far? Yeah. And what is your name? Anna. Anna. The abstract that you were talking about before, is that for the final proposal? It is. I haven't talked about that yet. Thanks for anticipating. I'll get to that in just a moment. Okay? Thanks for asking. And, yes? What is your name? Carl. Carl. Okay. Carl. Can we choose the discussion section? Yeah. No problem. Carl's question is, what if I'm assigned to some discussion section that doesn't fit my schedule, can I go to another one? No problem. And you can even go to one one week and a different one the next week. No problem. Okay? And it's posted online. It's posted on the syllabus exactly when the discussion sections will take place. Let me show you that. Okay. So this is the course website. Okay? Notice over here that there are instructions for the book report. I'll change this very slightly for 2013. There are instructions for the proposal. I'll change this very slightly. There are three examples of proposals that got an A, and then the syllabus. Okay? In the syllabus, I've listed the discussion sections, where they meet, et cetera. Feel free to go to any of these. Okay? Let me zoom through this. This is online. I'd like you to read this carefully. I'm going to hold you to all of the provisions that are in here. Okay? So anything that's written in here, it's the equivalent of me saying it. All right.
I'm not sure exactly why it is that we're cut off on the right. A lot of this recapitulates what I just said. Okay. Let's get to Anna's question. Over here, there will be, let's see, one moment. Okay. On February 21st, 2013, you will turn in an abstract for your proposal. Okay? So an abstract is a short condensed version of what your proposal is going to consist of. This tells me whether or not you're on track. And I'm going to use this as a way to give you early feedback about your idea and tell you whether or not I think your idea fits the definition of chemical biology, whether or not I think your idea is a creative one or not so creative. Okay? So this gives me a chance to give you feedback before you turn in your proposal. Okay? And this abstract is worth 10% of the points for the proposal assignment. Okay? So in other words, 3% of your course grade will be determined by that abstract. Okay? Note that all assignments are due by 11 a.m. on the due date. There is a late policy, but I hope that doesn't apply to you. Questions so far? All right. Yeah. No. Just stretching. All right. There's some information here about adds and drops. There's a frequently asked questions section. Do I need to attend discussion sections? Yes. Discussing the paper, turning in the final assignment. Oh. If you have not taken all three quarters of Chem 51 or two semesters of sophomore organic chemistry, you should drop the class. Okay? You're going to get blown out of the water. Okay? So you must drop the class now. It's a prerequisite, and yet every year someone slips through. Don't take this class if you haven't taken the full sophomore organic chemistry series. Okay? Okay. There's a whole thing on incompletes over here. Oh. Academic honesty. Unfortunately, we're going to talk about this later in the class. I do not want it to apply to you. The major portion of your grade is going to be writing assignments. And so academic integrity issues loom large, unfortunately, in this class. Every year I have to give someone an F grade on an assignment, which ends up turning into like a C minus, D plus kind of deal, because they tried to plagiarize an assignment. Don't let that be you. Let's make this the year where I don't have this problem. Along those lines, if this is the year where I don't have any plagiarism problems, I will give out an additional 3% of higher grades. So I'll assign the grades and then I'll go through and I'll bump up 3% of the course grades to the next higher grade. Okay? So if everyone in the class avoids having any plagiarism or academic honesty issues, so no cheating on the exams, no plagiarism, no academic honesty issues, I will bump up the grades by 3%. Okay? That means four or five of you at each level are going to get a higher grade. Okay? So that means, like, three or four people who are going to get a B plus, I'll move them up to A minus. I'll take the three or four top A minuses and move them up to an A. Okay? That's the deal. Okay? We'll talk some more about this because it's a slippery slope. And it's best that we don't have to have this conversation later. Okay. So anyway, that's the information on the syllabus. I'm holding you entirely to the contents of that syllabus. So I'm expecting you to go home and read the syllabus carefully. I don't have time to talk about every aspect of it now. I'd like you to go home, though, and read it carefully please. Okay? Questions? Okay. Skip that. Skip that. Okay. Let's get started. So we already heard the question, what is chemical biology?
How does it differ from biochemistry? I gave you kind of a quick answer. I want to delve into this topic a little bit further. Okay? So here's the working definition of chemical biology that we'll be using this quarter. And it's important that you understand this. The definition is: using chemistry to advance a molecular understanding of biology, at the level of atoms and bonds. So the way I know that we're talking at the molecular level is if we're talking about atoms and bonds. Okay? And that's what I'm looking for in terms of a definition of chemical biology. There is a second corollary to this definition, which is using techniques from biology to advance chemistry. And some examples of this are, for example, using molecular biology techniques to develop combinatorial libraries of chemicals, which is one of the projects that my own laboratory does. Okay? So there are two parts to this: using techniques from chemistry to study biology, or using techniques from biology to solve problems in chemistry. In both cases, these involve looking at molecules at the level of atoms and bonds. And that's where it's distinct from biochemistry. Biochemistry also uses techniques from chemistry, but oftentimes they are content with looking at molecules as sort of amorphous blobs that are represented as, you know, spheres or something like that in textbooks. And this class will be down at the level of atoms and bonds. And that's how you know we'll be talking about chemical biology. So later in the class, when I ask you to come up with an idea in chemical biology, a proposal idea, then you should be thinking at the level of atoms and bonds. And that tells you whether or not your idea will be acceptable. Okay. So chemical biology advances both chemistry and biology. And I wanted to give you a couple of historical examples of this. For my money, the very first chemical biologist was Joseph Priestley, this guy over here. He was a remarkable character. So he isolated oxygen and other gases. Okay. So he was isolating these using electrolysis and other techniques. And he would isolate these in bell jars. And then he'd use these chemicals to study biology. So one of the experiments he did, for example, was subjecting poor mice, mice that he would trap in fields, to these different chemicals that he was isolating. And he found that the mice, for example, could live in oxygen but could not live in many of the other gases that he was isolating. Okay. So that's a really interesting example because it's using the very latest techniques from chemistry to understand better how respiration works, how organisms take in oxygen. And at the same time, it's using a technique from biology as a way of solving a problem in chemistry. And the technique in biology is: does the mouse live or die? Can the organism survive under these conditions, to tell me something about those chemicals? Right? Joseph Priestley didn't have any spectroscopy available to him. So he's using a technique from biology, a very qualitative technique to be sure, but a method nonetheless, to tell him something about what's happening at the chemical level. Okay. Now, Joseph Priestley had some radical ideas about the colonists in America and the theological dissent that was going on in England at the time. And I like to say that the very first chemical biologist had his house burned by an angry mob who came rampaging through his village with pitchforks and were out literally to get his head.
And we've had a proud tradition ever since of iconoclastic thinkers and independent people who are guaranteed to rile up the masses. But of course, his house is not getting burned because of his chemical virtues. This was then carried on by Sir Humphry Davy, who's shown here at the Royal Institution conducting experiments on his colleagues. He's having them inhale bags made out of silk that contain gases. And then he's looking at the violent excretions that happened afterwards. And so this is just a classic woodcut from the period. Okay. Now, these are sort of early workers; perhaps, historically, the most important experiment in chemical biology was done by the great Friedrich Wöhler in 1828. Here's a picture of him. Notice that these guys are pretty young. Okay. These guys, you know, they were doing this stuff in their 20s. Okay. They're not much older than you. Any of you in this classroom, five years from now, you could also be doing stuff that would change how we think about the universe. Okay. That's the way science works. It's one of the great things about science. Okay. So don't think about this as being done only by old people. It's not. These great ideas are oftentimes had by young iconoclasts who have clever ideas and just want to push the bounds. Okay. So here's Friedrich Wöhler, 1828. He's running an experiment in his laboratory, this silver cyanate experiment, where he's trying to do what would be just the most pedestrian of exchanges of salts. Okay. So what he's trying to do is synthesize ammonium cyanate using silver chloride, which he knows will precipitate out. Recall from Chem 1 that silver chloride precipitates out as a white powder. And he's doing this by simply mixing silver cyanate together with ammonium chloride. And he's expecting, when he heats this up, that the silver chloride will precipitate out and he'll be left with ammonium cyanate. It turns out that's not what he got. Okay. That was not the product that occurred. Instead, what happened was he got out this other product that crystallized out of the reaction flask. And when he smelled this other product, he knew immediately what it was. What he smelled was urea. And urea had been isolated from urine, from dogs and humans. And so urea was a known compound. And back then the primary way of characterizing chemicals was by their smell, by their taste, you know, some gross physical properties. And because urea has a distinctive smell, he could readily characterize this. Now, here's the significance of this discovery. What Wöhler recognized was that this urea was identical to the urea that's obtained from dogs and from humans. But the difference is, this did not come from a living organism. In other words, using just mineral sources, you can make the same chemicals that are found in living organisms. So there's not some sort of special property that animates the chemistry of living organisms that somehow makes it special. Instead, it's going to be governed by the same rules that are found in chemistry outside living organisms. Okay? And this is really important, because at the time there was this notion that living organisms would have some sort of special spark that in some way would make them alive and make their chemistry unique and special. And what Wöhler is showing us by this experiment is that in fact there was nothing unique and special about the chemistry inside living organisms.
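In modern notation, the chemistry Wöhler stumbled into is usually summarized in two steps, a salt metathesis followed by a rearrangement on heating. This is a sketch of the standard textbook rendering, not Wöhler's own notation:

AgNCO + NH4Cl -> AgCl (white precipitate) + NH4NCO
NH4NCO -> (on heating) H2N-CO-NH2, urea

So the ammonium cyanate he expected really did form, but the heat converted it into its isomer, urea: same atoms, same formula CH4N2O, completely different compound.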
Okay. So these are great examples of using chemistry to understand biology at the level of atoms and bonds, in the case of urea. Let's move on. Another principle that underlies chemical biology is evolution. We're going to be talking a lot about evolution in this class. And the reason we're going to be doing this is, first, it simplifies knowledge, and second, it's going to guide experimental design. And here's two views of the great Charles Darwin. We can't talk about evolution without making reference to Charles Darwin, who articulated, you know, 150 years ago the principles behind evolution. There are two steps to evolution. The first step is to diversify, to generate a diverse population of molecules, of organisms, of phenotypes really. And then the second step is to select for the fittest from this diverse population. I'll explain the word phenotype in a moment. Don't panic if you didn't understand that word. So there's simply two steps here: generate diversity, select for the fittest. These steps are then repeated again and again to evolve organisms that can solve some sort of problem. In terms of chemical biology, we think about generating diverse populations as ways of shuffling around bio-oligomers in a combinatorial manner. And I'll show you that in a moment. And we often do experiments that involve some selection for fitness, where we're going to make a large population of molecules, mix them up, and pick out the ones that can best fit a criterion or set of conditions. This is a very powerful principle that allows us to make progress very quickly in chemical biology. And this is used as a technique by hundreds of laboratories in the field. Okay? So we use evolution not just as some sort of theoretical underpinning; we also use this as an experimental framework. And I encourage you, when you're thinking about proposal ideas, think about evolution as a tool to help you speed up getting towards molecules that do stuff for you. Okay? So this is used extensively. It's also used extensively to organize knowledge. When we talk about, say, the ribosome, which is the machine that translates mRNA into proteins, and I'll show you what that looks like in a moment, I don't have to talk to you about some sort of special ribosome that's found exclusively in humans or dogs or something like that. Because it turns out that the same mechanism used by ribosomes in humans is also used by bacteria. It's even the same mechanism used by archaebacteria, a different branch on the tree of life entirely. And so what this means is that I don't have to teach you about the special chemistry of humans. I can talk about the chemistry that underlies all organisms on the planet, because we all evolved from common ancestors that solved these mechanistic problems in chemical biology. Okay? So this provides a powerful approach to evolve molecules, which I alluded to on the previous slide, but equally importantly, this helps us to organize knowledge and makes it much simpler for us to talk about universal chemical mechanisms that underlie all life on the planet. Okay. So speaking of sort of universal principles that underlie all life on the planet, the central dogma of modern biology is going to appear in multiple ways throughout this quarter. In the first way, this is how we've organized the textbook that we'll be using this quarter. Okay?
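Before we get to the textbook's organization, the diversify-and-select loop just described is compact enough to sketch in a few lines of code. This is a toy illustration, not any laboratory's actual protocol; the target sequence, population size, and fitness function are all hypothetical stand-ins for whatever property a real selection would demand.

import random

ALPHABET = "ACGT"
TARGET = "GATTACA"  # hypothetical stand-in for the property we select for

def fitness(seq):
    # Toy criterion: number of positions matching the target sequence.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq):
    # Diversify: change one random position to a random base.
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

population = ["AAAAAAA"] * 20
for generation in range(30):
    population = [mutate(s) for s in population for _ in range(5)]   # step 1: diversify
    population = sorted(population, key=fitness, reverse=True)[:20]  # step 2: select the fittest
print(population[0], fitness(population[0]))

Run it and the population converges on the target within a few dozen generations; the same diversify, select, repeat rhythm drives phage display and the other library selections mentioned earlier.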
So the textbook has different chapters, and it's organized according to the central dogma. So the central dogma describes all biosynthesis that takes place in cells and on the planet. Okay? So everything that you're going to synthesize in your cells is in some way encoded by the central dogma. The central dogma tells us that the DNA found in the nuclei of eukaryotic cells is the blueprint upon which all biosynthesis is based. This DNA is transcribed into RNA and then translated into proteins. Okay? So this is the earliest diagram by the great Francis Crick, who recognized the far-reaching implications of this dogma very early on. Okay? This is the earliest example of where it was articulated. It looked just like this. We now know, for example, that this dashed line over here is in fact a real line. There is an enzyme, reverse transcriptase, that can convert RNA into DNA. But this line over here, where RNA is used as a template to make new copies of itself, this line never materialized. We have not found it in many years of looking. In fact, it would be a great chemical biology proposal to come up with a way of doing that. Okay? So here is a different way of looking at the central dogma of modern biology. So at the very top, DNA. This biopolymer up here is going to encode messenger RNA, and in fact all RNAs. The conversion of DNA into the complementary RNA takes place using an enzyme called RNA polymerase. Okay? This is nice because it's going to be polymerizing RNA. This makes sense. I'm going to be referring to enzymes today and in future classes; enzymes are proteins that catalyze chemical transformations. Okay? So these lower the transition state energy for key reactions that take place in the cell. And here's our first example of this, the enzyme RNA polymerase that's responsible for transcription. In addition, there's an enzyme DNA polymerase that allows replication of the DNA to make new copies of the DNA when the cell has to divide. Okay? Here's the ribosome that I alluded to on a previous slide, which is responsible for translation of RNA into proteins. This central dogma continues as proteins then catalyze reactions that lead to other bio-oligomers that are going to be very important in this class. For example, we're going to see a class of bio-oligomers called terpenes that are used by plants and microorganisms for signaling; polyketides, a class of molecules that's very important as natural products for antibiotics and other pharmaceutical uses; and then oligosaccharides, the glycans that decorate the surfaces of your cells and play key roles in protein folding and key roles in cell-based signaling. Okay? So here's my plan for this quarter. We're going to have two lectures about each of the bio-oligomers that's depicted here. Okay? So next week I'll give two lectures about arrow pushing. Week three will have two lectures about DNA. Week four, two lectures about RNA. Week five, two lectures about proteins. Week six, oligosaccharides. Week seven, polyketides. Eight is terpenes. Oh, actually I'm sorry, I'll have four lectures total about proteins. I can't resist, I'm a protein guy. So yeah, I'll have a total of four lectures about proteins, but everything else we'll have two lectures about, and we'll be covering a chapter a week in the class. Okay? So necessarily some of the material in the textbook will be left aside. Okay? Everyone's still with me so far? Okay.
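Since the whole quarter hangs off this DNA to RNA to protein flow, here's a minimal sketch of it in code. The complementary base-pairing rules and the four codon assignments shown are real; the twelve-base "gene" is a made-up toy, and a real codon table of course has 64 entries.

# Tiny fragment of the real codon table (a full table has 64 codons).
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna_template):
    """RNA polymerase's job: build the complementary mRNA from the template strand."""
    pair = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pair[base] for base in dna_template)

def translate(mrna):
    """The ribosome's job: read codons three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return "-".join(protein)

dna = "TACAAACCGATT"      # toy template strand (complementarity only; polarity ignored)
mrna = transcribe(dna)    # gives "AUGUUUGGCUAA"
print(translate(mrna))    # prints Met-Phe-Gly

Transcription is just complementarity (T pairs with A, A with U, G with C), and translation is just a table lookup three bases at a time until a stop codon; much of the next nine weeks is the chemistry that makes those two lookups actually happen.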
So I told you that everything that's synthesized in the cell is synthesized in a deterministic way starting with the DNA up here. And it turns out that's not strictly true. And I want to explore a little bit more of the subtleties of this concept. So first of all, we need to define what is the unit of synthesis. So proteins and DNA, sorry, DNA is read out in units called genes. Okay? Where each gene is going to encode a single protein. Genes have two essential parts, an on-off switch and an expressed sequence. The on-off switch is where transcription factors bind. These are proteins that can encourage RNA polymerase to bind to the start of this gene and encourage it to start transcription. Okay? Similarly, just as there are promoters, there are also other ways of shutting off the synthesis as well; it gets complicated. This transcribed region then becomes the messenger RNA, which is then translated by the ribosome into the protein down here. Okay? So here's an example of a transcription factor binding to DNA. Notice that the DNA has a structure that can nicely accommodate the structure of this protein. I'm going to be talking a lot more about proteins later, but I want to tell you about a convention that we're going to be using. Okay? So proteins, hopefully as you know, are composed of amino acids that are strung together by amide bonds. Okay? If what I just told you totally doesn't make sense, go back and read the supplemental organic chemistry text. Okay? So when we look at these amino acids and we just look at the amide bonds and the carbon that's alpha to that amide bond, we can trace out that backbone using these ribbon structures. So these ribbon structures do not show the side chains of the amino acids; rather, they simply trace out the sort of scaffolding backbone of the protein. Okay? So that's what these ribbon diagrams will look like. And then here's a structure of DNA down here. Notice that this alpha helical ribbon, this curlicue ribbon, fits neatly into the DNA's major groove. We'll talk much more about that later. Okay? Let's take a look at the world's smallest gene. This is in the Guinness Book of World Records for smallest gene. In this case, this gene encodes microcin C7, and the peptide it encodes is called microcin. Microcin is a translation inhibitor. It's a protein, well, it's a peptide, a short piece of protein called a peptide, that's used by microorganisms to kill off their neighbors. Okay? So the microorganisms that grow on your skin, that grow in the, you know, far recesses of the walls, you know, that grow all around you, are constantly fighting chemical warfare with each other. Okay? Their goals are to kill off their neighbors and then give themselves more resources that allow them to grow better. Okay? To grow faster and to be more populous. Okay? And microcin is a good example of one of those antibiotics or compounds that kill other organisms. Okay? And this is actually a very complicated binary toxin. On the one hand, there's this peptide over here that allows the microcin to be transported into the competing bacteria. Okay? So the bacteria look at this complicated thing, they sniff at the peptide region and think, oh, that peptide looks yummy. And if I eat that, I'll get amino acids as a source of building blocks for my own proteins. Okay? That's kind of like the bait. Okay?
So the competitor picks up the bait, transports microcin C7 into itself, at which point enzymes in the competitor break apart this peptide and unveil the translation inhibitor down here that shuts down translation by the ribosome. This is very bad news for the competitor, right? If the competing microorganism cannot translate mRNA into proteins, it cannot live, it cannot divide. It will die very quickly. Okay? And so in the end, what we're seeing is that this smallest gene is rather complex. Its toxic fragment is highlighted over here, and the rest of it also plays a key role as well. Okay? So to make something as complicated as this requires a number of genes that are lined up over here, where each one of these arrows represents a sequence of DNA. Okay? And we'll talk more about the directionality of the arrows later, in week three; for now, don't get too worked up about it. Notice, though, that it takes several genes to compose this toxin. Okay? So some of these genes are doing things like adding on this non-peptide-like toxic fragment. Okay? So some of these genes up here are encoding various enzymes. Okay? So that's these microcin enzymes, MccB, MccD, and MccE. So these enzymes are adding on stuff and modifying the peptide that was otherwise encoded by MccC in the center over here, or sorry, MccA that was encoded up here. Now at the end of this, even though this is the world's smallest gene, delivering a tiny little peptide, the resultant peptide is still fiendishly complex. Okay? This thing includes a large number of different stereocenters, indicated by the dashes and the wedges. And furthermore, this isn't the half of it. Right? This is a very simple example. The proteins we'll be talking about, the proteins I've been showing you today, for example the transcription factor, consist of hundreds of subunits, hundreds of amino acids, each one likely with its own stereocenter. And so the chemical biology considerations become enormous when we start looking at this in greater detail. Okay? All right. So we've looked at a gene. Let's talk next about the collection of genes. All of the genes together that are found in an organism are referred to as a genome. Here's one representation of the genome of the bacterial model system, a bacterium called E. coli. We'll be talking a lot about E. coli. I'll have another slide about it in a moment. This is used extensively in chemical biology laboratories, including mine. And its genome looks like this, where in this representation it's shown as a circle, and each one of these colored bars tells us something about the size of the gene, whether it's GC rich, et cetera. Okay? So reading out the information here is not so important; suffice it to say that the human genome has around 24,000 or so genes. And when you compare that against almost any other machine that we have around us, this number sounds ridiculously small. One of the challenges, however, is that even though we have this complete parts list for simple organisms like E. coli, it's not clear what each one of these parts is doing. And so a goal of functional genomics, and for that matter a goal of chemical biology, is to try to make better sense of these parts lists. Okay? And let me show you what I mean on the next slide. Okay? Let's imagine that you had a transmission from a car, okay, and imagine that you had a parts list of all the different gears found in that transmission. Okay?
I can tell you from some experience that just staring at those different gears, even, you know, staring as hard as you possibly can and using your best, you know, sort of logical reasoning, you're going to have a really, really hard time trying to put together each one of those little gears. Okay? I don't care how smart you are. It's really a hard problem. And so we have that same problem when we look at genomes. When we look at genomes, it's not clear what each one of these parts is doing. And one of the roles of chemical biology is to help us annotate genomes and teach us about what each one of those parts is doing in terms of the larger machine. We'll talk some more about that; that will be a topic called functional genomics. Okay? So chemical biology helps us fill in the dynamics of the process and how these pieces fit together. Okay? One way that it fills in dynamics, where dynamics means change over time: an important area of chemical biology develops new tools that allow us to see molecules at the single-molecule level and understand how they change over time, how they dynamically interconvert at different speeds and things like that. And Miriam is one of the world's experts at this. She can tell you more about it. Now, another big challenge that we have is that oftentimes we have big differences in genomes that lead to the same species. Here, for example, are three different strains of the model bacterium E. coli. Okay? So here's three different strains, and only 40% of proteins are shared between these three. Notice that they look identical. They're all called the same species because they can mate, they can exchange DNA with each other, which for bacteria, it turns out, is not necessarily the same as being the same species. But in any case, these are all named E. coli, yet they have vast differences in what DNA they've picked up from their environment and from other microorganisms. So simply knowing the parts list is not going to be enough for us to explain what's similar and different between these organisms. Okay? And for that matter, when we start looking at different organisms from the same population, we see a similar sort of diversity despite very, very similar genomes. Okay. So I've been talking to you both about humans and also bacteria. I need to hopefully just very briefly review for you that the differences between those organisms are vast. Okay? I'm hopefully not telling you anything you don't already know. Bacteria are classified as prokaryotes; humans and other multi-celled organisms, or even single-celled organisms that have multiple compartments in them, are classified as eukaryotes. I'll tell you about that in a moment. The big difference here is that the prokaryotes don't have any compartments for the most part. The DNA is kind of organized into a nucleoid, but for the most part there are no compartments inside the cell of a prokaryote. Whereas when we look at eukaryotes under the microscope, we find something totally different. What we find is a bunch of organelles, which are these little compartments in here. Okay? And these organelles have different functions for the cell, rather than the cell being just a big bag that has all of the functions being carried out kind of randomly within that bag. Okay. Now, getting back to this idea of genomes, nearly identical genomes can lead to very different people. So even though our genomes are 99.9 percent identical, we see vast differences.
So this is a challenging concept, but what's happening here is that vast differences in transcription underlie these different phenotypes that are observed, where a phenotype is the physical outcome of the gene. Okay? So all of us have roughly the same genomes, yet the phenotypes that come out differ at the cellular level by different transcription levels that program our cells into having different functions. So even though each one of these cells has the same genome, the cells end up having different functions by having different transcription levels of different genes within the genome. And furthermore, at the organismal level, this plays out in other ways as well. Okay? Also at the level of transcription. Okay. So here are six different human cells, and you can see vast differences in their morphologies, their shapes, et cetera. And for that matter, I don't think I have to work hard to convince you that these have very different functions inside the organism, in this case humans. Okay. So I showed you briefly a prokaryotic cell over here. I'd like you to memorize all the structures, everything that's labeled here and labeled in the textbook. Okay? And along those same lines, I'd like you to memorize all the parts that are labeled in the textbook for a eukaryotic cell. Okay? So you should know basically the simple anatomy of a cell. Okay? Do we need to know the functions as well? The basic functions, if they're in the book, yeah, I'd like you to know them. Okay. So we've looked at DNA. DNA gives us genes, which gives us genomes. The next section down on the central dogma is RNA. So for RNA, the complete collection of RNA transcripts in a cell, tissue, or organism is called the transcriptome. Okay? So here's the DNA, the genome of the organism. Here's a bunch of RNA transcripts. And the number of copies of each one of these transcripts is controlled by the transcription factors that I showed you earlier. Okay? That was the alpha helix fitting into the DNA. If that transcription factor is very effective at grabbing onto RNA polymerase, then you'll get more copies of the mRNA transcript being produced. Okay? So these different numbers of copies of the transcript being produced can give rise to very different phenotypes of the organism. So ultimately, a lot of the phenotypes that are observed are being driven by differences in transcription, in addition to differences in the encoding DNA. Everyone still with me? Okay, things are going to get a little bizarre next. It turns out that the RNA that's encoded by DNA is further diversified by a process called RNA splicing. Okay? So RNA splicing takes the RNA that's encoded by the DNA and then sort of shuffles it around very subtly. Okay? And the results are a bunch of different mRNAs encoding potentially different proteins down here. Okay? And the results are sometimes dramatic differences in the resultant proteins. So the consequences of this can be proteins that have very different functions from the same starting gene. You can end up with two different proteins, splice variants of each other, that are encoded by the same DNA, that have different results inside the cell and different phenotypes. Okay? Now, there's going to be further diversity, but just to organize things: we've seen at the DNA level, the collection of all genes is called the genome. We've seen at the RNA level, the collection of all RNA transcripts is called the transcriptome.
And then at the level of proteins, the collection of all proteins is called the proteome. Okay? There's sort of a neat organization to all of this. Okay? Now, what I'm showing you, I've already showed you this representation of the genome for E. coli. This is a way of representing the transcriptome using a technique called RNA microarrays. We'll talk about this more in week four. And then you can do a similar thing, make a big collection of all the different proteins found in the cell or organism or tissue and array these on microscope slides as well. Okay? So all these techniques are ones that we'll talk about later in the class. Okay. So we've talked about how you can start with an RNA transcript. Oh, question over here. I just wonder, for the RNAs, the messenger RNA, when you splice, do any introns stay or do all the introns get out? Okay. So what is your name? Ashley. Ashley. Okay. So Ashley's question is, what actually gets translated on the messenger RNA? And. Yes. And there's what? Introns stay in the RNA? Yes. What actually gets translated into proteins from the messenger RNA? Okay. That's your question, right? No. No. The question was like, for splicing, can we take out the introns, right? Yes. So we already keep the exons. I wonder if there are any introns left in the mRNA, the final product? Oh, okay. So your question is more subtle than that. Okay. So could I defer that until we get to week four, which is the RNA week? Okay. Good question. It will get an answer. So other questions? Okay. So we've seen how splicing can start with transcripts and then add additional diversity. It turns out that proteins are also subject to diversification as well. So after the proteins are synthesized by the ribosome during translation, these are subject to further diversity in a couple of different ways. Okay. The first way is for the proteins to be modified chemically on their surface. And so one example of this is on elongation factor 2. So this is post-translationally modified to produce this functionality up here called diphthamide. Okay. So the protein is enzymatically converted from having this imidazole functionality up here into having a diphthamide functionality. This is absolutely required for translation by this organism, organism being humans. Okay. So elongation factor 2 that's been post-translationally modified is required for translation to take place. However, the diphtheria toxin has a way of modifying this diphthamide. Okay. When that happens, that prevents protein translation from taking place. Okay. Diphtheria toxin, fascinating, it's an effective way of killing cells. What's important here though is this notion that even after the proteins are synthesized, they're further diversified by chemical reactions that take place on their surface. Because this takes place after translation, these are referred to as post-translational modifications. Okay. Post meaning after, so modifications made after translation. And this is really important. This means that we can start with say 24,000 or so genes in the genome, get, you know, say 50,000 or 60,000 different splice variants, get say 60,000 different proteins, and then further diversify those 60,000 different proteins into 200,000 or even more different proteins. So in the end, although our genomes look relatively uncomplex at the level of 24,000 or so different parts, this vastly understates the true number of parts, which is much, much larger due to reactions like this one. Okay.
Furthermore, these proteins go off and catalyze other functions within the cell leading to further diversity. Okay. Everyone still with me on the post-translational modification? Let me show you what I mean. I refer to this as post-translational processes. So this is the process by which proteins catalyze, as enzymes, the production of other molecules: oligosaccharides, glycans, polyketides, and terpenes. Okay. So once the enzyme is made, it's just the start. After that, all kinds of other things take place. Okay. And this is it: proteins can be covalently altered by enzymes, okay. Those are the modified proteins that I showed you on the previous slide. In addition, there are spontaneous processes that alter the surfaces of proteins. Okay. So for example, oxidation of proteins is sort of an unavoidable consequence of having a metabolism that's dependent upon oxidation, right, and producing oxidation products. So there are some strong oxidants that are produced by your cells, and those oxidants will come along and modify the surfaces of proteins spontaneously, okay, using thermodynamically accessible reactions. And so these are examples of post-translational modifications. In addition, proteins themselves will catalyze reactions that will synthesize these molecules down here, which again are part of the central dogma. They're bio-oligomers. Now, one thing I have to tell you is that while I told you that the central dogma in a deterministic way determines everything that's being synthesized by the cell, it's not purely deterministic, okay. And there's an element of randomness to all of this, okay. And that's what I want to show on the next slide, okay. We're going to have randomness in the sense that the central dogma will dictate the identity of enzymes, and then these enzymes are going to go off and catalyze reactions that will not be determined by the DNA. That will be at some level a little bit randomized, okay. So one good example of this is the process of appending oligosaccharides to the surfaces of proteins. Okay, so R over here is meant to represent a protein and each one of these shapes is meant to represent a different carbohydrate, a glycan that's going to be attached to the surface of the protein, okay. Now, the way this works is that each one of the enzymes that's going to do this attachment is encoded by some gene up here, encoded by the DNA, transcribed into messenger RNA, which in turn makes the protein, the enzyme that's going to catalyze bond formation to add this glycan onto the oligosaccharide, okay. What's less clear though is, you know, the small variations in the resulting glycans down here. So for example, enzyme 2 makes this bond; if there's enough enzyme 2 around, maybe it makes another bond. Enzyme 11 makes this bond, but maybe if there's enough enzyme 11 around, maybe it makes another bond over here. So there's diversity in the resulting structures that are biosynthesized by the enzymes, okay. Furthermore, even though I'm lining up the enzymes in this order, the order of the genes in the genome is unrelated to the final product that results in this glycan on the surface of the protein, which eventually appears on the surface of the cell. So there is considerable heterogeneity in these post-translational processes, both in terms of modifications, in the sense that some of these modifications are occurring spontaneously just through thermodynamically accessible reactions.
And furthermore, when these post-translational processes are catalyzed by enzymes, there is considerable stochasticity, randomness, in terms of what the resulting structures will be. Okay, so this is one of these kind of mind-blowing concepts that we have to get comfortable with, okay, that we can't in a deterministic way know every single molecule in the cell to a precise level. Okay, everyone comfortable with that concept? Okay, don't look so mopey-eyed and downcast. At the end of this class, hopefully, you'll at least have a framework to understand it. Okay. Okay. So I want to switch gears now and talk about some other principles, different types of techniques that you need to know that are going to make our lives so much easier in understanding the experiments behind chemical biology. Okay, so earlier, I told you that an important principle in chemical biology, or an important technique used extensively in chemical biology, is to make a large diversity of molecules and then sift through this diversity to find a few molecules that do something special. This is a technique of molecular evolution. It's used extensively in chemical biology. So there's going to be one equation in today's lecture that I need you to know, and this is the equation that determines the diversity of a collection of molecules. That diversity, the number of oligomers that results, is the number of subunits raised to the power of the length of the oligomer. Okay, and let me try to show you this in action. Okay, so let me turn on some lights here. Okay, so let's start with DNA. Let's make a big collection of DNA. So DNA consists of four bases, A, C, G and T. Again, we'll talk some more about their chemical structure in a moment. Let's try to imagine then that we're going to make a collection of all possible tetramers. Okay, so number of possible DNA. Oh, let's make it pentamers. Okay, so the number of possible pentamers is going to be equal to the number of subunits raised to the length of the bio-oligomer. Okay, so the number of subunits is four. That's the number of bases. That's raised to the power of five, because we're making pentamers. Okay, so this is the example for 5-mers; if we wanted to do 10-mers, again, we'd have four raised to the tenth power. Okay, so this is a very simple equation, very, very useful. It can tell you very rapidly whether or not the experiment you propose is reasonable, right? If you propose something that's going to fill this room with DNA, probably not so reasonable, right? That's not practical. But if you propose something that you could fit in, say, a one-mil test tube, totally reasonable, or a one-mil Eppendorf tube, that would work. Okay, any questions about this formula? You ready to apply it? Okay, good. Okay, one of the great failings of teaching a class like this one is that the example problems that I'll do for you, where we apply an equation or whatever, inevitably are a lot easier than the ones that appear on the exam. And I apologize about that. That's kind of, that's part of pedagogy, I guess. Okay, now, it turns out that chemical biologists apply this to DNA, but they also apply it to much more complicated molecules. So, for example, we can do a combinatorial synthesis of a series of molecules that look like this. Okay, so we can set up a modular architecture to allow combinatorial synthesis that, in a way similar to composing bio-oligomers, will result in molecules that have modules that have been tethered together.
Okay, so for example, this is a framework called a peptoid. Okay, and so instead of a peptide, where the peptide would have a side chain coming out on the alpha carbon over here, this instead has side chains coming out on the nitrogens. You can very readily make a large combinatorial library of these peptoids and make a great diversity of structures, using exactly the same formula that I showed on the previous slide to calculate the resultant diversity. Okay, and let me show you how that would work. If you have 20 subunits, so you have 20 different possible building blocks, and you're going to make 3-mers, then you would have 20 raised to the third power as the resultant diversity of that library, okay, where a library is a collection of diverse molecules, okay? So this idea of combinatorial diversity applies both at the level of shuffling around bio-oligomers, as is applied in biology, but equally importantly, it's used as a principle that underlies chemical synthesis in chemical biology as well, including the chemical synthesis that you learned about back in 51C, okay, and we can get much more complicated and make libraries of benzodiazepines, which are shown here, and this is an important class of small molecules that's very commonly used in many different drugs. Okay, why don't we stop here? When we come back next time, we'll be talking about diversity in biology.
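As a quick recap, here is that one equation of the lecture written out, with the worked examples from above (a minimal restatement in LaTeX; the numbers follow directly from the formula):

```latex
% Library diversity: D = s^\ell, with s subunits and oligomer length \ell
\[
  D = s^{\ell}
\]
% DNA pentamers:  s = 4,  \ell = 5  \Rightarrow  D = 4^{5}  = 1024
% DNA 10-mers:    s = 4,  \ell = 10 \Rightarrow  D = 4^{10} = 1\,048\,576
% peptoid 3-mers: s = 20, \ell = 3  \Rightarrow  D = 20^{3} = 8000
```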
UCI Chem 128 Introduction to Chemical Biology (Winter 2013) Instructor: Gregory Weiss, Ph.D. Description: Introduction to the basic principles of chemical biology: structures and reactivity; chemical mechanisms of enzyme catalysis; chemistry of signaling, biosynthesis, and metabolic pathways. Index of Topics: 0:30:30 What is Chemical Biology? 0:42:01 The Central Dogma of Modern Biology 0:46:54 What is in a Gene? 0:53:31 What is a Genome? 1:00:33 Inside a Human Cell 1:09:58 Combinatorial Assembly Generates Diversity
10.5446/18856 (DOI)
Hello and welcome to this class. This will be pretty much, think of it as a classroom exercise for all of you who work with iOS applications, maybe on a daily basis, maybe on a hobby basis. So I hope that I will show you guys some tricks that you can keep up your sleeve when it comes to dissecting and analyzing an iOS binary. So the name is kind of cryptic, but I hope that it will make sense and it will be clear by the end of this talk. So a very short intro for those who haven't seen my face yet. So that's my name. I've been working as a pentester since the early iOS days. So I've been working with iOS apps since iOS 4.0, which dates back to as early as 2008, 2009. And my field of interest and field of research is focused on how to map and analyze Objective-C-based applications, which actually make up most of the iOS apps out there. So there are Swift apps, the technology was announced alongside iOS 8, but as things stand now, it's not as widespread as Apple supposed it would be. So most of the time when we see Swift inlays in normal Objective-C applications, that's kind of easy to understand, because most companies have their Objective-C code base ready by this time. And they obviously don't want to replace everything they had earlier and which worked fine for them. So this talk is not a fun talk about 0-days. It's not a fun talk about jailbreaking. And as far as I understand, I don't have any jailbreaks for iOS 9.0.2, which is the latest version as of yesterday. Instead, think of this talk like an advanced course on Objective-C screwdrivers, to help you guys find your way more easily around iOS applications when it comes to actual pentesting. So this is like a collection of small ideas and hints I came up with or I read somewhere else, and that helped me a lot when it came to testing iOS applications. And when it comes to actual penetration testing work, most of the time our worst enemy is time. So if you guys have had some experience with professional penetration testing, you will all agree with me that the biggest burden of this job is to finish everything on time. But unfortunately, the technology and the applications we test and we work with do not make it easy for us to finish our job on time. And I hope that these tricks will save you guys literally hours and hours and hours of misery. So just to start, a little bit of intro. I'm pretty sure everyone is familiar with iOS. It used to mean Cisco IOS, and there it used to mean the software running on their routers and switches and network gadgets. However, since Steve Jobs coined this i-stuff nomenclature, iOS means something else. The first OS version came around in 2007 with the original iPhone. And now, as of today, iOS 9 is the main version. At first, it used to be a toy for hipsters. I remember when I had my first iOS talk in 2010, 2011, we said that, hey, these devices are really cool. They look nice, they're awesome, but they're not really designed and not really suitable for corporate users. Now, it's significantly changed. So this statement is not true anymore, as iOS is a fully-blown and fully adoptable corporate device platform. And when MDMs, mobile device management tools, came around, this became more and more true, as corporates and enterprises have means to control what kind of devices can hold their data. And as time progresses, we have a whole lot of mobile banking applications, document management things, whatnot. So more and more corporate data gets to iOS devices.
And judging by the trends, this will not turn around unless something very bad happens in the world. So a little bit about iOS application testing. The first phase, as in all pentests, is static mapping, for which there's a whole bunch of letters on the slide. I won't read everything aloud, but you guys can get a grasp of what this phase means. So this means like having a clockwork and taking a magnifying glass to see what's inside and what kind of components the clockwork uses, what kind of gadgets it uses, what kind of APIs, what kind of platforms, what kind of third-party modules, and so on and so on. So this information can be gathered during the first phase, when we simply take a binary and start disassembling it and start peeking around within the binary. So the purpose of this whole operation is like anatomy in medical sciences. So anatomy focuses on bodies which are not moving. So everything is like static and they describe what they find inside. So this is kind of the similar thing we do when performing static mapping. What do we do here? When we have an iOS binary, we can easily extract a class header structure in case the application was written in Objective-C, as the Objective-C runtime framework relies extensively on reflection. Therefore, method names, class names, and other related info has to be compiled within the binary, and it's there, and it's easily extractable. As for the nomenclature, it's another interesting topic: if someone with some kind of experience with iOS apps just takes a look around the names of classes, names of methods, and so on, they will get an understanding about how the developers work, how they structure things, how they name things, how they use things, and so on and so on, and this information can be very useful when it comes to later phases of the assignment. So this phase is very boring, I admit. So it involves hours and hours of staring at Ida Pro and Hopper and other disassembly tools, poking around within the binary with a hex editor, looking around the class dump and so on, but it's worth the hassle, because later on, when you start peeking around while the application is running, as in dynamic analysis, or you try to patch jailbreak detection routines or certificate pinning, or try to pinpoint where crypto happens, in these later phases, the effort you put into static mapping will pay off very well. And when we come to the next bit, who knows what's on the picture? Okay, just tell me, what's the name of the game here? Yeah, that's The Incredible Machine. It's an amazing game. I remember, as a kid, that was what made me a computer nerd in the first place, I must admit. So when we do dynamic mapping, we try to figure out what's within the binary and what happens when the binary is being run and when the application components are in motion. This phase usually involves a jailbroken iPad, which I luckily have in here, with all the necessary modifications that are required to do app testing. We usually use some kind of debugger, which is like a pin and a needle to stick within the app, and this, by the end of the day, gives you a much better understanding of what's within the binary. And this is where the nightmare begins. As, obviously, application developers, and especially security-related application developers, don't really want you to easily understand how their product operates.
As we will see later on, this brings us to a whole bunch of problems we have to face when doing an autopsy on an actual application, or doing dynamic analysis. So first of all, as I said, we need to have a jailbroken iPad to run our stuff, and most of the time when we use security-related products, they just simply say, hey, dude, this is a jailbroken iPad, please leave me alone. Otherwise, even if the app runs, we press the button and something happens, like an HTTP connection is made, or some kind of encryption takes place, something is written to the keychain and so on, and we want to know what happens under the hood. Once we hit jailbreak detection, I'm pretty sure that anyone who has ever encountered having to patch a binary, having jailbreak detection to be patched out of the binary, knows this is one of the most frustrating things that can happen to you. So you spend hours trying to find the point where jailbreak detection takes place, you patch that particular section of the binary, and you run the app again with like a pulse of 180, and boom, jailbreak detection kicks in again, but at some other place. And you usually have to do this over and over and over and over and over again, and this is as boring as it sounds. So we want to have some kind of method to find each and every occasion where jailbreak detection is made. And last but not least, we have an interesting-looking method within the binary, and let's say, hey, dude, I just checked out the class dump, I saw that this method does some kind of encryption, it takes two parameters, does some kind of write operation to the file system, I want to know from where it's invoked; that's a pretty usual question when it comes to analysis. And these problems are really, really time consuming. So this is the first slide, which looks very scary, but bear with me, it's not going to be that scary. So when it comes to binary analysis, application developers try to make your work as hard as it can be. So I'm showing you a couple of things which are tools in the developer's arsenal to screw with the pentester. So by the end of the day, it's still possible to analyze binaries, but it takes much more time, and that's it. So every complication and every anti-debugging or anti-reversing effort you put into your binary just merely raises the bar in terms of effort and expertise. But by the end of the day, everyone will die. So first of all, this one is loop unrolling, which is invoked with a very funny, very funny -funroll-loops switch within GCC. And this means that, as I indicated in the slides over there, if you have a function or a method, instead of having it optimized into one single occasion, if you use this switch, GCC will copy the same byte sequence to the relevant places one after another. As for inline functions, this is another obvious choice. So that means instead of having a nice and very easily patchable and very easily method-swizzleable function (this word now exists because I just coined it): if you have a very nice function, let's say this void doSomething thing, and in case it's a simple point where the application checks for the device being jailbroken or not, and it returns a boolean value, then it's pretty trivial to make it return a NO each and every time we run. And this is pretty well known amongst developers. So only rookies use similar kinds of jailbreak detection mechanisms. Instead, they opt for witchcraft like inline functions.
So whenever they have to invoke their jailbreak detection method, they just copy the corresponding byte sequence to the appropriate places. And there you go. You have even 200 instances of jailbreak detection routines, and you would have to patch each and every occasion one by one. And that's very time-consuming (I'll show a minimal developer-side sketch of this trick right after this section). Three other tricks developers usually use. Symbol stripping is a standard procedure to secure binaries. However, on Objective-C applications, it's not that widely usable, as I said. The runtime itself needs a whole bunch of information about the binary itself. Therefore, method names, class names, and other related info will always be there. So no matter what you do, at some point, if you use Objective-C, you have to keep the names of the methods and your objects. And by the end of the day, most of the time, as a tester, you will be able to reconstruct the header structure. Reflection. We love reflection, because it makes static analysis a pain in the butt. That means that if you see a very nice function which is invoked somewhere, but since its name is assembled at runtime, Ida Pro won't be able to pinpoint that particular location for you. And unless you have a fancy trick up your sleeve, which I will show you later on, you will be sweating blood when trying to figure out where the particular function is invoked. And last but not least, my favorite one: use plain C++ instead of Objective-C. As Objective-C can be freely mixed with C++, it's perfectly plausible and usable if you just put some plain C++ stuff within. And C++ can be very easily, very easily obfuscated. And we saw that many times: when it comes to MDM systems and other security-related products, they rely heavily on similar kinds of operations. And the best thing is that these tools can be combined. So I chatted with a developer of a corporate MDM solution about how they detect jailbroken devices, and why I was literally spending days in misery trying to make that damn thing run on our jailbroken iPad. And the guy said that they have a mutator engine which takes a byte sequence that does some kind of jailbreak detection, and they mutate it and they copy it to random places within the binary, when they do the compilation process, as assembly inlays. So that eventually means that you have literally hundreds of places where jailbreak detection routines are implemented. And there's not a single function called jailbreak detection which returns a boolean yes or no. And that's a really, really hard thing to analyze and to circumvent. Okay, first of all, before going to much, much more technical stuff, let's see a common problem. We have a screen and we want to know what happens when I press a particular button or switch or whatever on the screen. And how can we find out which locations and which methods in the application binary are responsible for handling user interaction for a particular screen? In order to, and that's the solution, but I will show you that. In order to do this, I have a nice iPad here with this Damn Vulnerable iOS Application thing, which is a very nice thing. So if you are trying to be iOS hackers, or you are the people who make other people try to be iOS hackers, go out, grab it, it's free, and it's one of the best playgrounds out there for learning how to hack iOS apps. So we have this very nice screen. I tap on the jailbreak detection button and it says the device is jailbroken, and I want to see how the magic happens, what happens under the hood.
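Here is that inline-everywhere trick sketched from the developer's side. This is an assumed example for illustration only, not code from any real product; the function name and the checked path are made up, and the always_inline attribute is the standard GCC/Clang way to force the copy-per-call-site behavior described above:

```objectivec
#import <Foundation/Foundation.h>

// Assumed example: force the compiler to copy the check into every call site,
// so there is no single, patchable jailbreak-detection function in the binary.
static inline __attribute__((always_inline)) BOOL deviceIsJailbroken(void) {
    // naive check: jailbroken devices typically have Cydia installed
    return [[NSFileManager defaultManager]
               fileExistsAtPath:@"/Applications/Cydia.app"];
}

void doSomethingSensitive(void) {
    if (deviceIsJailbroken()) {
        return; // bail out; each caller carries its own copy of this check
    }
    // ... the sensitive work ...
}
```

Combine this with the mutator-engine idea from the talk and you end up with hundreds of slightly different copies of the same check scattered through the binary.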
So in order to find those user-interaction handlers, I have a whole bunch of options. For instance, when we come to the class dump, as I said, this is the, let me try to make this bigger for you. So this is the kind of, you guys can see it. So this is the kind of structure you can extract from the binary itself. So it's truly doable. So I have a very, very, very, very detailed long list of objects, methods and interfaces. Let's try to find the word jailbreak and boom, yeah, we have a JailbreakDetectionVC, which happens to be the exact one which we are looking for at the moment. But what if we can't do this in such an easy way? I mean, what happens if it just doesn't work? What can we do now? So I have here something for you. We'll first SSH back to my device. Okay. There we go. And first of all, I use Cycript. Cycript is like an Objective-C manipulation framework that essentially creates a bridge between JavaScript and Objective-C, which is a very perverted thing to say, but surprisingly, it works awesome when it comes to pentesting. So let's try to find the... Okay, so we're within the application and I will come here to see. So this keyWindow construct shows you a handle to the entire screen you see. So when it comes to manipulations on the screen, you can easily access those items on the screen from here. And if we say we want to see what's on the screen... So it's not very easy to see what these massive titles are. Come on. So this shows you like a tree-like structure of what's on the screen. And this can come in very handy, as if you look for the titles here, like, see, this is Jailbreak Detection here. You have the menu item here. So everything you see on the screen will be in this tree-like structure. These are like buttons, and these UIButtons can be found within the tree-like structure with this kind of... Okay. So with this subviews construct, we can make our way down the tree. And by the end of the day, we reach this UIButton here. And this UIButton object obviously has a target, which tells us which object handles it. So in iOS, or in view controller terms, what we see on the screen is a view object. But we want to see the controller object behind it. And this construct, this trick, can be used to pinpoint the exact method that's responsible for user interaction (I'll put a condensed version of this Cycript session right after this paragraph). Okay. Moving on. Next question. We have a very nice application and we know that somewhere it uses some kind of API call. And we want to pinpoint where it does it. So ideally, we need, or we look for, a method which does not make any kind of modification to the binary. So that means that if we deal with, like, an MDM product or something that checks its own integrity, we don't have to patch the integrity checking modules, and we don't have to check the modules that check the integrity of the integrity checking modules, and so on and so on. Instead, we'd love to do this without ever touching the binary. So the typical areas of interest are jailbreak detection, keychain usage, cert pinning, crypto, and I will show you guys two separate methods to pinpoint and to get those precious stack traces when it comes to an actual API call. First of all, I'm going to use GDB, which is very, very useful sometimes, even though it's not supported by Apple anymore. They instead opted for LLDB as their new debugger of choice. So we go to this jailbreak detection demo application again, and we go for... So what I did is simply fired up GDB and attached to the process itself. So let's make it run.
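For reference, the Cycript session just demoed condenses to roughly the following. The process name and the button address are assumptions taken from this particular demo, and the exact output will differ on your device:

```
$ cycript -p DamnVulnerableIOSApp      // attach to the running app (process name assumed)
cy# UIApp.keyWindow.recursiveDescription().toString()
    // dumps the whole view tree; note the address of the interesting UIButton
cy# var btn = #0x15d46360              // #<address> turns a raw pointer into an object (example address)
cy# [btn allTargets]
    // => the view controller object that handles the tap
cy# [btn actionsForTarget:[[btn allTargets] anyObject] forControlEvent:64]
    // => the selector invoked on tap; 64 == UIControlEventTouchUpInside
```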
Back in GDB, I'm really interested in, for example, for the demo's sake, where the stat function is used. So stat is used for file system interaction. It can be used for... it's actually a family of API calls. However, most of the time they are used to check whether or not a file exists, or whether some characteristics of the file are there or not. For instance, you can check whether or not a file is executable, and these calls, this command family, are used in many jailbreak detection routines. So we'd love to see how the application does jailbreak detection in this case. So what we do is simply put a breakpoint on stat. We can easily define a couple of commands to be run each time that particular breakpoint is hit. So first of all, I'd love to print out the first parameter of the stat function. I did my homework. I checked the API reference page on the developer.apple.com website, and I was very happy to realize that there's a string as the first parameter, and that contains the file name itself. And the pointer to that particular string object is handed over to the function in the R0 register, which I'm printing out here. I want to see where we are. In GDB, this gives you a nice stack trace, and that's it (the whole breakpoint setup is sketched below). Let's see. So we come to the piracy detection exercise, and again, I did my homework. I know that this piracy detection routine does what its name suggests, piracy detection, and it uses this stat function. So I press the button, and, whoa, this is very interesting. This shouldn't happen. Okay. I am there. And as we see, the actual stack traces are here. So whenever, yeah, because GDB didn't refresh itself, it's still there. I can go on and see how the breakpoints are hit and where these stat functions were invoked from. And this can be very, very easily usable. Many times applications employ some kind of anti-debugging aspect. However, that can many times be pretty easily circumvented. As I said, this is the GDB again for reference's sake. The inevitable pro of using GDB is that it looks awesome. I mean, you're typing on a black screen, white letters. It's so hacker-like. However, with GDB there are many problems also. For instance, many times it's not perfectly feasible, either because your device cannot run GDB, or there is no hacked version of GDB for your version of iOS, or the application itself is actively preventing being traced with GDB. And that can be, again, circumvented, and most of the time it's doable. But the biggest problem with it: it's not persistent. So that means each and every time you want to tweak your application, you have to do this over and over and over again. And we want to use something more usable. I mean, it looks awesome in a demo when it comes to client presentations. However, when it comes to actual pentesting work, it's a waste of time, most of the time. So we want to use something that can be used in a more permanent fashion. Well, I'll show you. We're going to compile, by the end of the day, Cydia Substrate extensions. Cydia Substrate used to be called MobileSubstrate. And that means that on jailbroken devices, you can use Cydia Substrate to dynamically load libraries into your application. This is pretty much the same concept as with DLLs in Windows. So that means that an application does not have to hold each and every feature set in memory, so that it doesn't consume that much memory. And this is especially important when it comes to mobile devices.
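Going back to the GDB demo for a second, the breakpoint setup described above looks roughly like this as a GDB command file. This is a sketch for 32-bit ARM, where the first argument lands in r0; on other ABIs the argument register differs:

```
# sketch: log every call to stat()
break stat
commands
  silent
  # first argument of stat(): the path being checked
  x/s $r0
  # stack trace: who called stat()
  bt
  continue
end
```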
Cydia Substrate is used by a whole bunch of applications like VNC servers, SSL Kill Switch, Snoop-it. I'm pretty sure that these names ring a bell for you guys who have some expertise in iOS pentesting. And as a result, Cydia Substrate extensions are the pentester's choice. That means that no manual patching is needed. Removal is basically a simple rm: you just have to delete a dylib file from the file system if you want to disable your extension. And the creation, as we'll see, is trivially scriptable if we have the class dump, which I will show you. What we're going to use, our tool of choice, is Theos. Theos is an on-device iOS toolchain. So eventually, you don't have to have Xcode to develop iOS applications. You can use Theos. Everything runs on the device itself. And besides full-blown apps, you can easily compile substrate extensions for existing apps. And this gives us a whole bunch of opportunities. So this means that we can inject whatever we want into an iOS application. This can bring very cool conclusions, as we'll see. Okay, so before we move on, meet my really nice demo application, which is pretty much these five lines, or six lines, of code. So this means that we check whether or not a particular file within the file system exists. And in case it exists, we invoke one function, and if not, we invoke another. And that's pretty much it. Even if you are not fluent in Objective-C, it's pretty straightforward what it does. So when it comes to disassembly, this is what it looks like. I hope you guys can... Ah, you don't see shit. I will fire up Ida Pro with you. Okay. This is the one. So I fired up Ida Pro and loaded my demo application. It's a very small app, so it takes no time for Ida Pro to load it. So this is the... I'll try to make it. No, it's a bit better. I'll close this one. So as you can see, we basically implement this kind of function. So even if you are not fluent in ARM assembly, and it looks horrible, these strings make it quite easy to understand what's going on. So we check whether or not this Cydia.app file exists. In case it exists, we invoke one thing, and if not, we invoke another. So that's trivial, kind of very easy to understand. So how can we bypass... Or how can we see where the actual... So the heart of this thing is this fileExistsAtPath: function, which is an iOS library call, and as its name suggests, what it does is simply return a boolean yes or no, whether or not a particular file exists. And this is the actual Cydia Substrate extension, written in Objective-C, we are going to utilize to pinpoint where this particular function is run from. Just a quick overview. So this %orig construct instructs the framework to run the original function itself. So it returns a boolean value, we have it here. We log some things, we log the stack trace, and we return what has been returned. So that's like a proxy thing: we intercept API calls, we do something, and we return what's received (I'll reproduce a condensed version of this tweak below). Let's see how this thing works in practice. Yes. Okay. Okay. There was my screen. I'll show you what this application does in the first place. So I quickly fire up. It means I have to delete my extension first. Okay. Yeah, I have my dylib here. So just to clean up from the rehearsal of this demo. So the actual dylibs are found within the /Library/MobileSubstrate/DynamicLibraries directory. And once it's done, once we compile our binary, it will end up here. So that's pretty easy to delete.
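For reference, the logging tweak described above boils down to a few lines of Logos for Theos. This is a sketch of the same idea, hooking NSFileManager's fileExistsAtPath: method, not the speaker's exact code:

```objectivec
// Tweak.xm: a transparent logging proxy around fileExistsAtPath:
%hook NSFileManager

- (BOOL)fileExistsAtPath:(NSString *)path {
    BOOL exists = %orig;  // run the original implementation with the original arguments
    NSLog(@"fileExistsAtPath: %@ -> %d", path, exists);
    NSLog(@"call stack: %@", [NSThread callStackSymbols]);  // the precious stack trace
    return exists;        // behave exactly like the original
}

%end
```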
Whenever we need to remove or disable an extension, we simply delete the corresponding dylib file. So how are we going to solve this one? Okay, we have for this a very nice interface called nic.pl. It asks us whether or not we want to create a tweak, an application, a library or anything else. I said I want to create a tweak. Let's call it test. Yep. Okay. And we need to add what the actual name of the package is that we want to inject into. So we go here. We don't want to terminate anything else. And then we have a new instance, which does nothing but can be compiled. So this is a very useful thing to start from. And if we look at it, here we have this Tweak.xm, which is the actual file we need to compile. So this is the file we need to fill in with Objective-C code. And there's a bunch of instructions on how to fill in this file. So I have my substrate extension ready already. We paste it in. Simply save it. Yes. And then we make install. Okay. And then if we come to the log screen of my iPad, and we fire up, this is interesting, because I compiled the wrong thing. Okay. So the next step here is now to compile the right thing. This should be the one. Let's try again. Okay. Now it says it's jailbroken. And we should be, yes, and there we have our precious stack trace. So we can see from where the fileExistsAtPath: function was called. And once we're there, we can easily evade jailbreak detection also. Once we have control over what the actual API call returns, then we can use this framework to bypass jailbreak detection without ever touching the binary. So basically this is the tweak we use to bypass this very primitive jailbreak detection method (a condensed version is below). So it's pretty easy to understand what it does. In case the parameter is this /Applications/Cydia.app file, then we return a NO, otherwise we return whatever is returned. And we compile this in the same fashion. Okay. We can copy the same way I showed you earlier. Meanwhile, I delete what I made earlier. Okay. Now it's installed. And if I kill this thing and fire it up again, we'll see that it displays a device clean message, which means that we essentially bypassed jailbreak detection. And if we go to the syslogs, we see that our log message ended up here. So jailbreak detection has been evaded. So this was what I wanted to say. Any questions? That was a question. So the question was what we do when it comes to encrypted binaries. So what do you mean by encryption? Like the FairPlay DRM, which is applied on iOS binaries? Yeah. Yeah. So the thing is that whenever you download an application from the Apple App Store, it's encrypted, or not exactly encrypted, but it's obfuscated with a system called FairPlay DRM. And there are tools to decrypt or de-obfuscate those files. So Clutch is one of the projects which can be used for this purpose. But if you Google it, you will find very nicely written GDB articles on how to manually dump certain segments of the memory and how to calculate the offsets to dump memory from. So Clutch is the word you're looking for. And then you have a decrypted binary and you will be able to play these games with those binaries also. Do you have to use Clutch before all the inspection? Yes. You have to use Clutch before anything happens, because if you try to load the FairPlay binary into Ida Pro, then it will explode, because it will not be able to figure out what's within the binary. And we have time for one more question. No? Well then, thank you for your attention.
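And for completeness, the bypass tweak from the end of the demo is just a small variation of the logging hook. Again a sketch, with the checked path being the one named in the talk, built and installed the same way (make install in the nic.pl-generated project):

```objectivec
// Tweak.xm: hide Cydia.app from the app's file-existence checks
%hook NSFileManager

- (BOOL)fileExistsAtPath:(NSString *)path {
    if ([path isEqualToString:@"/Applications/Cydia.app"]) {
        NSLog(@"jailbreak check from: %@", [NSThread callStackSymbols]);
        return NO;   // pretend the file is not there
    }
    return %orig;    // anything else behaves normally
}

%end
```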
Black-box iOS application pentesting is a growing and hot topic. For most pentests, the most pain and effort are consumed by the initial phases of the work, i.e. basic mapping of the application features and where the individual features are implemented within the binary. We describe a MobileSubstrate-based, semi-automatic approach for mapping security-related features, such as encryption, jailbreak detection, and keychain usage.
10.5446/18851 (DOI)
Hi, everyone. Welcome. So my name is Sebastian Garcia and I'm going to talk about the network behavior of targeted attacks. In particular, we are going to model this traffic to identify it on the network. And this is part of a project that is called the Stratosphere IPS project; that is the project I'm working on. Wait, wait, I found the last one here. So we are going to talk a little bit about how we are researching on detecting this malware in the network. So we are using some machine learning tools and trying to see what's working and what's not working. So here in the audience, who is working on detecting malware or any type of botnet in the network? Who is working on that? Upstairs? No one? No? No malware detection? No botnets? Okay. And specifically on some APT attacks? Someone is focusing on APT? There we have someone there. Okay. So actually we are working with a lot of malware and botnets, but we like to focus on APT for two reasons. I will speak now about the first reason and later about the second one. The first reason for me is that APT, I'm not going to talk about, yeah, they are advanced. No, they are not advanced. They are persistent, not persistent. A lot of people know a lot about that. So what I like is that the goal of the APT is very specific, right? So when you are being attacked and they are attacking someone, they know who they want to attack and they know what they want and how to get it. So this is not the usual malware that is, I don't know, sending click fraud, or malware, or spam, or whatever, right? It's not money, or it's not usually a lot of money, but they are trying to get very specific information. And this is making the attacks very, very difficult to detect, right? In fact, they are not such advanced tools, right? What they are doing is like normal attacks. Some phishing emails, some malware, a very, very simple RAT, remote access tool, and that's it. It's working, right? If you know the Citizen Lab people from Canada, they research this a lot, and they found that most of the time the attackers are using very, very normal malware, and only once did they witness a zero-day attack in an APT case. So usually we are dealing with very simple stuff. The problem is that it's very difficult to analyze. So if you want to analyze, oh, there's people here. This is so close. So if you want to analyze APT, you can get the malware, you can analyze it, you can open the binary. It's not what I'm doing, right? I'm not a binary analysis guy. I like the network traffic. So how do you get the network traffic? Okay, I want network traffic. How do you get the network traffic? Okay, I can, I don't know, execute the malware, right? So I go there, I execute the malware. But what is the problem, or what is the difficulty, of executing this APT malware in my network? Why is it not the same? What do you think? What do you say? Yeah, I have the real malware. I execute it. And it's even connecting, right? I will say that the malware is up, it's running, the command and control is running, it's there, everything is perfect. Why? Why is the analysis not the same? No? Well, no human factor. In fact, it's something like that. I'm not the target. This is a targeted attack. I am not the target. So they are not going to attack me the way they are attacking some other guy, right? So it's very important to be the target to have this traffic, because I don't care about the packets in there. I don't care if the packet is TCP or UDP. What I care about is the intention of the attack.
I want the behavior in here. I want the malware author saying, okay, now get the document files. No, no, no. Now forget the document files. Get the screenshots. Oh, hey, it's doing something. Get the key logs. That's what I want. I want the intention here. I want the behavior. And that's why it's so difficult to get this information. So when we try to execute it, the first thing we find is that the lifetime of the campaign is very short, right? So if you are executing the malware like 20 days after it's been captured in a real environment, that's it. You are not going to find the infrastructure there. The command and controls are not working. Nobody is there listening. So the execution is not so good for us, right? So we can modify or write malware ourselves. That's what we did. We got some normal malware. We modified it and we executed it ourselves and we attacked ourselves, right? That is completely horrible. Specifically because there is no behavior. Yeah, I can attack myself. I can attack you. You can attack me. But we are not the real players here. So we did this to get the best traffic we can. But if you are analyzing this type of malware, this is an issue, okay? So this is the first reason why we are going to work with targeted attacks: because we like them, because they are very specific, very difficult to detect, and they are quite simple. But if you want to detect this in the network, imagine that you are going to detect this in the network, okay? So, you don't have a talk right now? Okay, sorry. I thought he was giving a talk right now. Okay. You can go to your talk if it's time for you to go. So if you are trying to detect this in the network, right, you have solutions in there. You have a lot of software. What are you doing? Really, in the network? I'm not talking about antivirus stuff, right? I'm talking in your network. So you are putting some firewall in there, some IDS, IPS, filtering. You start playing with indicators of compromise, right? You are registered to a lot of feeds, so you get all this information, a lot of domains, URLs, IP addresses. Your feed is coming all the time and you are blocking, blocking, blocking, filtering a lot. And also, you have a lot of fingerprints. So you have Snort, you have Bro. Okay, actually Bro, it's not with fingerprints, it's with a beautiful language. But you are using fingerprints. You are using payloads, right? You are capturing these and you are stopping them. And if this is not working, what do we have? Well, the last, the latest of the tools we have is behavior, right? Anomaly detection. So a lot of people are working on this. Anomaly detection is nice. It's like a buzzword. If you say anomaly detection, it's awesome, right? Nobody knows what's going on. But hey, yeah, we have some behavior in here. So what's the issue with anomaly detection? It's working. Okay, who here is using some anomaly detection software or product in the network? No? Here? No one there? Oh, there. We have one. It's true. They exist. Now, this is working. The problem with anomaly detection is that for an anomaly, you need to know what is normal. So you need the normal first, and then you spot the anomaly. And how do you know what's normal? Because we are human beings. We are changing all the time, our traffic, our patterns, our ideas. So that's an issue. If you go to the network, what is normal is changing all the time. So you should adapt again. And then you detect some anomalies. And then, when you have the anomaly, it turns out that an anomaly is not an attack.
And this is something that usually the people working with anomaly detection tend to forget. An anomaly is an anomaly. It's not an attack. So who is going to say if this anomaly is an attack? Okay, so you need people there watching it and reviewing and saying, okay, yeah, this is an anomaly. No, yes, this is an attack. It's another attack. So it gets very, very complicated. And in the end, you need people working on that. So there are some issues here, right? The issues we have are that, first, the lifetime of the indicators of compromise is unknown. So you're blocking some domain. You're blocking some IP. How long are you going to block it? One day, one hour, one month? Okay. How long is that IP in the list of blocked IPs? Nobody knows. Well, some people are analyzing this, but usually this is not information that's in there. If you see the analysis, some information is there for three months. And three months is a lot. That domain is down and not working in less than three days, right? So why block it for three months? So nobody knows how to do this correctly. And of course, who is verifying this? Who is verifying that the domains you got for blocking are really, really malicious? Well, some people, I hope, I don't know. But if you go to virustotal.com and you search for www.google.com and you say, hey, give me some indicators on that, you will find like 5,000 people saying this is malicious. And you will find like 17,000 people saying this is normal. So actually this is confusing, right? If you have an automatic tool working with this data, you will have a lot of domains that are false positives and you are blocking them. So the errors and the verification are very important. And nobody is looking at this right now. Oh, sorry. So also you have a huge amount of information. One malware can generate dozens of domains, I don't know, dozens of IPs, plus payloads, plus fingerprints. So you are blocking a lot, a lot, and a lot more, more, more, more every day. And actually you don't know what you are blocking. You don't know what you are not blocking. That's part of the game. And also this information is static. So it's not changing. It's not evolving. It's not adapting. That's an issue. And finally, for the attackers, it's very, very easy to adapt to these measures, right? The cost of adapting is not so much. Changing IPs? I have a lot. Domains? I can register thousands, right? So I don't care. Actually, the issue with attackers is this. They don't care. I remember once on Reddit reading an AMA of a real botnet malware author. And he said, yeah, I have, I don't know, something like 100,000 bots and I can use them. And I'm sending spam. And some user was asking the malware author, hey, how are you sending the malware, sorry, not the malware, the spam, sending the spam and checking that the spam is being read, and which is your best way of sending the spam. And the guy said, I don't care. Hey, but if you send the incorrect image, the people will know and they, oh, sorry, won't be able to open your email. And he said, I don't care. You pay me? I send your one million spam. You don't pay? I don't send. You pay me? I send. I don't care if the email is opened or not opened or whatever. They're making a lot of money and they have a lot of resources. So this is an issue, and most of them, they don't care. They just get another domain, they adapt, another IP, that's it. You're blocking it? They regenerate the malware. It's difficult.
Yeah, it's costly, maybe, but it's not impossible, right? So we have this issue here. And with anomaly detection, like I told you, most of the time it's very, very difficult to know if it's working, right? So what are we going to do here? What we are working on at the university is a behavioral method, but instead of focusing on anomaly detection, we are focusing on the behavior of the malware traffic. So we go to the network and say, okay, this is malware. I know it's malware because I'm analyzing it. And I want to learn which is the behavioral pattern of the malware in the network. And that's what we are going to do now. So the Stratosphere IPS project is the core project at the university. I didn't say it, but it's at a university in the Czech Republic. So you can find it online. Everything is published. And these are the four pillars, or main ideas, of the project. The first one is free software. Why do we want free software here? It's not because we love free software. We love free software, but it's not because of that. It's because we know the community is capable and we want the community to verify what we are doing. We need the people checking it, downloading it, testing it. And we need everyone saying, hey, this is not working. This is bullshit. No, this is working. You have errors in here. We can make it better. We can collaborate. We can send you stuff. Or tell us to stop doing that, or something like that. So free software is one of our main pillars. The second one is NGOs and civil society organizations. So at some point the Citizen Lab people said in their survey that the NGOs, the non-governmental organizations, are in a critical situation, because they don't have the resources to buy very complex tools for protection. They cannot buy from very large companies. But anyway, they are being attacked by very, very powerful governments. So for example, they work with the Dalai Lama in Tibet. He was attacked by China. China is a very, very powerful country, and they attacked the Dalai Lama with success, right? It was a completely successful attack. And they didn't have the resources. They didn't have the money, the people. They cannot defend themselves. So we are focusing on these types of organizations, which are very, very amazing targets for the attackers, but they don't know how to do it. They don't know how to defend. So this is the second pillar of the Stratosphere IPS project. The third one is the machine learning and the behavioral models. We want to have our research working in the network. We want to have, listen to this, our research be useful. So we want actually to go to the network and plug it in, and we want it to work. And this is usually what research people don't like so much, right? You are doing something. It's awesome. You publish papers, a lot of them. And when you are trying it in a real environment, yeah, maybe it's not working. So, and the last pillar is the verification. We want this to be very, very well verified. We want to try it as much as we can, to see what's going on, how we are doing, whether it's having errors or not, which errors, and why we have these errors. So these are the four pillars of the Stratosphere. Now, how are we doing this? How are we working with machine learning on the traffic? So we start with this idea of less is more. So when we start working in machine learning, you can be tempted to work with a lot of features, and we are going to say, no, no, no, no, no, use less information.
This is the first pillar that we are going to talk about. The second one is the disassociation. We are going to disassociate two models, and I'm going to show you now. And the third one is the verification, okay? So this means we are analyzing the behavior of the connections, not the behavior of the host and not the behavior of the network. This means that if you are going to the network, I don't care about the behavior of the 3,000 hosts, I care about one simple connection, and that's why we are able to create this behavioral model. Because if you try to create the behavioral model of a computer itself, it's very complex. The user is very complex. So we are not doing that. The second point is the disassociation. And that means that the representation of the behavior in the network, how we look at the behavior, is separated from how we detect the behavior, okay? Usually this is all together, but we are separating it. And finally, verify the models with real data. We need real data here. So, less is more. This means that when you connect, sorry, when you connect to any other computer on the Internet, your behavior is the same. So you connect to Gmail and you are checking emails; you are chatting the way you chat; the way you check Facebook, the way you use a website, the way you use your bank account is usually the same all the time. And this is going to identify your behavioral patterns, right? The second point is that we group the flows: all the flows in the network going to a specific service are grouped all together. So imagine that you are connecting to Gmail's web server, so we get all the packets and flows that you are sending to that port 80 of Gmail and we say this is your connection. And we are going to analyze that. And finally, as the connection is composed of several flows, we can see the behavioral patterns in here. So in the case of malware, and in the case of you, when you are using, for example, any web page, you are going from one state to the other: like chatting, not chatting, like downloading stuff, not downloading stuff. Like putting information in a web page, not putting, downloading a picture, looking at a picture, clicking on a picture; you are jumping from state to state. And that's what we want to model, right? So each flow is going to get its own state in our model. I want to give you one state for each flow you have. And our model for the states is based on four features. And these are very simple, right? We are looking at the size of the flow, the duration of the flow, the periodicity of the flow, and the time between flows. So I'm not going to get into the periodicity, because it's quite an issue to get that information. But you can see that this is very simple, right? It's like, why are you using this? You can have very, very amazing features in here. And the reason is that we tried those amazing features and they are not working. They are too complex, right? And when the model is too complex, and then you go and check it, and if the model is not working, you don't know why. Or worse, when the model is working and you are detecting, you don't know why. So at some point, it's very, very difficult to work with that. And that's why we have these four features in here, okay? So what we are doing with these features, we are creating this table, horrible table. The table is saying, okay, you got one flow, okay? And the flow has a small size. And then the duration of the flow is maybe medium. And the periodicity of the flow is weak periodicity. So I'm going to give you a capital V.
Or if the periodicity is weak, your size is medium and your duration is long, I will give you a capital F. Or you have a weak non-periodicity, or a strong non-periodicity. So we assign letters and numbers to each flow in the network based on these features, okay? And finally, we use some symbols — the dot, comma, plus, star and zero — to indicate the amount of time between the flows. Because having a periodicity of five minutes is not the same as having a periodicity of three days, right? Periodicity is periodicity, but it's completely different behavior. So we are trying to capture that information here. And if the flow has a timeout of one hour, we put a zero — a special symbol there. (A toy sketch of this encoding, in code, appears after this demo.)

So let me show you how we can look at this. Can we use it? Thank you very much. Okay. So, for example — I'm not sure if you can see this; it may be hard from upstairs. Can you see that? No? Completely no? Maybe we can turn the lights down a little in here? Can we try that? No? Okay, I will walk you through it, don't worry — it's horrible anyway.

So each line here is a connection: one computer connecting to another computer on some specific port. And each letter here identifies one flow. Here you can see, for example, a connection to a DNS service — these red letters that I'm sure you cannot read from there. This is r dot, r dot, dot, dot, r plus. And if you look at the letters, there is no periodicity here, because the periodic letters were between the letter a and the letter i, right? If you get to r, you are not periodic anymore. So if you look at this, you can say: hey, actually, this is not periodic, and this is port 80, port 80 — this is a normal connection. This is a normal computer doing everyday tasks. If you look at this specific connection, you will see a very strange port, 9131, and some periodicity here. And this is a Tor connection: the web service of Tor, when you are updating the Tor service, shows some periodicity. So here you see a lot of letters, but as you can imagine, most connections are just one or two flows, because it's a normal web page: you go to a web page, download something, download an image, that's it. You are not accessing every web page for hours.

So let me show you another one. Oh, my God. I will show you, for example, this one. This is a malware that is called Flu, and we will use it later. You can see here that it's also connecting to a lot of UDP and TCP endpoints. This is not periodic — not periodic at all. And then here we have some periodicity: i, i, i, h, h. You can see a pattern here, right? This pattern is — sorry if you couldn't hear me — very characteristic of its command and control. And here there is another command and control: a very periodic connection, and it keeps going and going and going. So this is one malware, called Flu, and this is a real execution.

Another one. I want to show you, for example, the MuREF botnet — a real execution of the MuREF botnet, which we ran in our lab. You can see here a lot of connections to port 80. And look at this. Wow, this is a command and control. But this one is not periodic, right? And this one, yes, this is periodic, and this one is not. And this is not periodic at all, right?
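To make the encoding concrete, here is a minimal sketch in Python. The thresholds and the exact letter table here are invented for illustration — the real Stratosphere table is larger and different — but the mechanics match the talk: one symbol per flow from (size, duration, periodicity), plus punctuation for the gap since the previous flow, with `0` standing for a timeout of roughly an hour or more, and the periodic letters kept in the `a`–`i` range as in the demo.

```python
# Hypothetical, simplified re-implementation of the flow-to-letter idea.
# Every flow of a connection becomes one symbol based on size, duration and
# periodicity, prefixed by a punctuation symbol for the inter-flow gap.

def bucket(value, low, high):
    """Classify a value as 0 (small), 1 (medium) or 2 (large)."""
    if value < low:
        return 0
    return 1 if value < high else 2

# 3 sizes x 3 durations -> 9 letters per periodicity class (invented alphabet;
# periodic flows get a-i, non-periodic flows get r-z, matching the demo).
PERIODIC_LETTERS     = "abcdefghi"
NOT_PERIODIC_LETTERS = "rstuvwxyz"

def flow_symbol(size_bytes, duration_s, periodic, gap_s):
    idx = bucket(size_bytes, 250, 1100) * 3 + bucket(duration_s, 0.1, 10)
    letter = (PERIODIC_LETTERS if periodic else NOT_PERIODIC_LETTERS)[idx]
    # Punctuation encodes the time between flows (thresholds are invented,
    # except the rule that a roughly one-hour timeout becomes '0').
    if gap_s < 5:        gap = "."
    elif gap_s < 60:     gap = ","
    elif gap_s < 300:    gap = "+"
    elif gap_s < 3600:   gap = "*"
    else:                gap = "0"
    return gap + letter

# Example: a small, short, periodic flow arriving 3 seconds after the last one
print(flow_symbol(120, 0.05, True, 3))   # -> ".a"
```

Concatenating these symbols per connection yields exactly the kind of strings shown on the slides.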
So, coming back to it: MuREF has different command and control channels, and each command and control has a different behavioral pattern. We can actually tell that this is one type of command and control and this is another type. And you can see here — yeah, it's continuously sending. And now this one is periodic, and you can see the pattern, right? You can also see some timeouts here. So this is how the letters look.

Wait for it — I want to show you, for example, this one. Oh, come on. So, no, no, no — here. Okay. This one is more difficult to see, but do you know what it is? It's the activity traffic — the traffic from your computers. This is how the computers here are behaving, right? You can see some people going to some web pages, UDP traffic, TCP traffic, and most people are just connecting to a web page, and that's it. You can see there are no periodicities, no command and control channels, nothing that behaves like something malicious. So this is a very easy way to look at a lot of traffic. Here it's for verification, but the tool looks at it automatically. And you can say: okay, there is no model here that looks like a command and control channel, or some attack, or something like that.

Of course, we are not trying to detect a specific, short attack — like going to a web page, one exploit, and that's it. For that you have antivirus, you have a lot of tools. We want to see what's going on in the network; we want to see the behavior. That's why, if you only have a very short attack, we are not going to get it — and actually we cannot; the tool is not for that. We want to see when you are being attacked, like in an APT, and your documents are being exfiltrated, for example. That we can capture.

So wait, I wanted to show you one more — okay, I will show you the last one, it's a Zeus botnet. Okay, come on. Zeus botnet. This Zeus capture is very, very large — about 25 days — and you can see a lot of traffic here. Look at this, right? A lot of traffic. But you see strange stuff, right? I will pause it so you can see. You will see strange things, like zero, zero, zero, and some periodicity — but then this is not periodic, then it's periodic, then it's not again. Do you know what this traffic is? This is the Zeus botnet connecting to Google. These are all Google IP addresses, and Zeus is using Google for a lot of stuff. But if you remember the normal traffic: even when you access Google, your traffic does not look like this. So we can differentiate between a normal Google connection and some malware abusing Google, right? And here, these are the command and controls. You can see a very periodic string here in the behavior — even this one. Look at this: it's periodic, but we have nine zeros. That means nine hours between two flows. So Zeus is sending a flow, waiting nine hours, and sending another one. And we can capture that, right? You can see the pattern here, and we can create a model from that.

So, going back to the presentation. Once we have these letters, what's going on with the behaviors here? This malware is generating the same behavioral patterns over and over again. When it's connecting with the command and control, it's the same behavior.
Actually, we can even see when the command and control is down, because the malware keeps connecting but the behavior is different, right? So we can distinguish these situations. Also, changing the behavior is very costly for the attacker: if you want to connect to all your bots at the same time and give orders at the same time, you need some type of synchronization. And if you lose that synchronization, it's more difficult for you — you cannot use all of them at the same time; it's maybe more difficult to make a DoS attack, right? So at some point you can change the periodicity — that's okay, we can still capture the change — but you still want to connect. If you don't have the command and control, you cannot control your bots. So you need a command and control — some command and control — and that's what is costly for the attacker.

This behavior does not expire easily, of course; infections can go unnoticed for hours. So how much time are you willing to wait for a solution in your network? Usually we say: I want real-time detection, real-time, I want to see the red light there, very quickly. But actually computers can be infected for hours or days, and nobody knows, and nobody cares, right? And then you tell the administrator: go there and clean that computer — and it's going to take hours. So there is enough time here to capture the behavior we need; this does not have to work within one minute. And finally, we collect both normal and malware behavior. We want them both: we need to know what's normal, we need to know what normal looks like. And then we can implement this.

So how can we implement detection for this type of traffic? Okay. This was the first part — do you remember the disassociation? The letters are how you look at the behavior; we are not doing detection there. So far, no fancy machine learning. Now that we want to use this for detection, how are we going to do it?

The Stratosphere project has implemented two models so far, and two more are under development. I will talk about the first one: interpreting the transition from one letter to the next as a Markov chain. How are we doing this? It's actually very easy. If you have the letters here — A, comma, A, comma, Z, plus, D, plus, D, plus — we look at the transition from each symbol to the next one, and we model those transitions as a Markov chain. That means we have a matrix saying: the probability to go from the letter A to the comma is one, 100%; the probability from comma to A is 0.5; and so on for everything. We learn these transition probabilities and build the matrix — and the matrix can also be drawn like this. It's the same thing, just a diagram: from A to comma the probability is 1, from comma to Z it's 0.5, from Z to plus, and so on. So we can model what the transitions were in the original malware, okay? (A minimal sketch of learning such a transition matrix follows below.)

Once we have these transitions, we create these Markov models of the known behavior. We can look at Zeus for a long time, or MuREF, or any botnet, and capture this model, this behavior; we create the Markov chain, the matrix, everything we need, and the model is ready. Now we have these models and we know what they are: this is a command and control that is down, this is a command and control that is working, this is another type of attack.
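As a rough illustration of this step, here is a minimal Python sketch that learns first-order transition probabilities from a behavioural string. The example string and the resulting numbers — A to comma is 1.0, comma to A is 0.5 — are the ones used in the talk; everything else is a simplification of whatever the real framework does.

```python
from collections import Counter

def train_markov_chain(states):
    """states: the behavioural string of a known capture, e.g. 'A,A,Z+D+D+'.
    Returns {(from_symbol, to_symbol): probability} learned from the
    consecutive symbol pairs, i.e. a sparse transition matrix."""
    pairs = Counter(zip(states, states[1:]))
    totals = Counter(states[:-1])          # how often each symbol is left
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

zeus_model = train_markov_chain("A,A,Z+D+D+")
print(zeus_model[("A", ",")])   # 1.0 -- 'A' is always followed by ','
print(zeus_model[(",", "A")])   # 0.5 -- ',' goes to 'A' half of the time
```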
Now we can take unknown traffic from an unknown network and compare, asking: what is the probability that this traffic was generated by this model? That's why we are using Markov chains. So we say: okay, for this model the probability is actually very, very low. And what is the probability under the second model? It's this much. And the third one. Then we choose, and we say: out of all these models — including the normal ones — the probability that you were generated by this command-and-control botnet is the highest, so I will say that you are a command and control, okay? This is how detection happens at the end. It's not perfect, of course, but so far it's working. (A toy version of this scoring step is sketched after this passage.)

So I want to show you some more stuff. The first thing I will show you is how to see the difference between two models. Let me show you what this is. No, go away. Okay. We compare a malware called Zeus — you can't see it there, it doesn't matter, it's called Zeus — with a malware called Vavo, or so; I'm not sure of the name exactly. This second malware, Vavo, was created by some people who maybe are here in the audience, I don't know — the CrySyS Lab people there. So thank you very much; it was awesome. Go and see their talk later, because you will learn a lot. They created this amazing malware and tried to see how other tools detect it, right? This is amazing for us, because it's something very real, very difficult, and very well done. Most of the people trying to detect it are trying on the host; we are going to see how we can look at the traffic instead.

So — sorry, I copied the wrong line — I'm going to show you what happens if we compare the model of Vavo with the model of Zeus. This comparison says: the distance between the Vavo malware and the Zeus malware is actually very close to one here, over the first 10 flows. This means they are quite similar — the behavior of Vavo is quite similar to one of the behaviors of Zeus. But if we keep looking — not 10 flows, give me 30 flows — the behavior starts to change, starts to diverge. With 50 flows it's more different, and with 100 flows more different still. Every time we add more flows, we see the behavior of Vavo and this Zeus diverging. This means the early behavior of Vavo and this Zeus is similar, but later on in the network they grow apart. So we are trying to use this Zeus behavior, which we have known for a long time, to detect the Vavo traffic in the network.

To do that, I will show you something like this — I hope you can see something there. I'm going to run one experiment in the tool. I didn't tell you, but this is the Stratosphere Testing Framework, one of the tools we have in the project for experimentation. I'm going to say: okay, use the Zeus model, take all the traffic from Vavo, and tell me how you detect it — tell me when you detect what. When you run this — I don't care about the description — it says: okay, I'm going to separate the traffic into time slots, like five minutes or ten minutes or fifteen minutes.
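Here is a minimal sketch of the scoring just described, under simplifying assumptions: each stored behaviour, malicious or normal, is a transition-probability dictionary as in the previous sketch, unseen transitions get a small floor probability (a crude stand-in for the model generalisation mentioned later in the talk), and the winner is simply the model with the highest log-probability. The literal model values below echo the earlier example; they are not real Stratosphere models.

```python
import math

def log_probability(states, model, floor=1e-6):
    """How plausible is it that this behavioural string was generated by the
    given Markov model? Sums the log-probabilities of every transition."""
    return sum(math.log(model.get(pair, floor))
               for pair in zip(states, states[1:]))

def classify(states, models):
    """models: {'Zeus C&C': {...}, 'normal web': {...}, ...}.
    Picks the stored behaviour (malicious or normal) that best explains it."""
    return max(models, key=lambda name: log_probability(states, models[name]))

zeus_model   = {("A", ","): 1.0, (",", "A"): 0.5, (",", "Z"): 0.5,
                ("Z", "+"): 1.0, ("+", "D"): 1.0, ("D", "+"): 1.0}
normal_model = {("r", "."): 0.66, (".", "r"): 0.66, (".", "."): 0.34,
                ("r", "+"): 0.34}

print(classify("A,A,Z+", {"Zeus C&C": zeus_model,
                          "normal web": normal_model}))   # -> Zeus C&C
```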
And in each of these time slots, it will tell you whether the models matched or not — okay, I detected something, or I didn't. So I will scroll back up here. You can see — this is going to be very quick — that in the first time slot, from 0 minutes to 5 minutes, there are some IP addresses in the traffic and there are no ground-truth labels. That means that when I was looking at the traffic, there was no indication of malware behavior yet — just some packets. And we also didn't predict anything: no detections, no known traffic, nothing happened so far.

In the second time slot — I'm sorry, I used blue letters here, which is horrible from a design point of view — you will have to believe me that this blurry stuff says: botnet. It means the Vavo malware is using this IP, we know it's Vavo, it's using its command and control, and we put the label botnet there for sure. This is a botnet, a malware. But we didn't detect it — our model is not matching here. So in the first two time slots there are no detections, and the type of error is a false negative, because we missed it; we didn't capture it. But in the next time slot — that is, 15 minutes in — we are able to detect the Vavo botnet with the Zeus command-and-control model. So we have a true detection here, a true positive: at this point the model matched and we were able to capture it, to detect it. If you keep looking at the next time slots, though, we don't detect it anymore — because, remember, I showed you that the models were diverging; they get different over time. So after some point the traffic is not similar anymore — but it was similar for long enough for detection.

This is one way we can experiment with this. At the end of the experiment you have what was detected and what wasn't, and you have all the fancy measures — true positive rate, precision, you can have them all (a toy version of this per-time-slot bookkeeping is sketched at the end of this passage). Then you can see: is this model enough for detection, or do we need more?

So this was an example of using a Zeus model for detecting Vavo. There are other models — we also tried with a model of another botnet whose name I don't remember now — that was also able to detect Vavo. And then we can use the Vavo model itself for other things. Of course, if I use the Vavo model for detecting Vavo, it will detect it — but hey, that's cheating, because I don't know Vavo in advance. I should find Vavo in the network using whatever tools I have. That's why we use the models already in the database for detecting the new, unknown traffic.

So I will continue with the presentation. Yeah. We can see the distance between models; we can experiment: use all these models on this traffic, tell me what you find — and especially tell me what you find and when. And this detection is done by generalizing the models. I won't speak about that here, sorry, but the Markov chain models can be generalized in such a way that we detect similar traffic, not only exactly the same traffic, right?
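For illustration, here is a minimal sketch of that per-time-slot bookkeeping, assuming each slot carries a ground-truth label and a detection flag. The example run mirrors the Vavo-versus-Zeus experiment above: nothing labelled in slot one, two misses, one hit, then a miss again.

```python
def evaluate_time_slots(slots):
    """slots: list of (ground_truth_is_botnet, we_detected) pairs, one per
    time window (e.g. 5-minute slots). Returns the confusion counts from
    which TPR, precision, F-measure etc. are derived."""
    tp = sum(1 for truth, det in slots if truth and det)
    fp = sum(1 for truth, det in slots if not truth and det)
    fn = sum(1 for truth, det in slots if truth and not det)
    tn = sum(1 for truth, det in slots if not truth and not det)
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

# Slot 1: no label, no detection. Slots 2 and 4: labelled botnet, missed.
# Slot 3: labelled botnet, detected with the Zeus model.
print(evaluate_time_slots([(False, False), (True, False),
                           (True, True), (True, False)]))
# -> {'TP': 1, 'FP': 0, 'FN': 2, 'TN': 1}
```

Note how the final numbers depend entirely on the slot size and on what counts as "a detection" — which is exactly the verification point made next.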
So I want to say something here about verification. Usually people ask: okay, is it working or not? It's like going to any antivirus company, any protection company, and saying: hey, is your product working or not? How do you know? I don't know. Maybe yes, maybe no — it depends on a lot of things. There is no easy answer here. If somebody tells you, yeah, our product is working amazingly well, I would doubt it a lot: if I change the network, if I change the attack, if I change the timing, if I change the normal people — maybe if I change the country — your detection is going to have some issues, for sure. That's why nothing works that well.

So for us verification is very important. Our model is working when — with this dataset, with these labels, with these people, with this traffic, and with this way of verifying. Because, do you remember I showed you the experiment using five-minute time slots? If you use ten minutes, or one hour, the results are completely different. If you use one minute, completely different again. It also matters how you consider a detection successful. For example, you have a malware in there and you want to say: yes, I can detect it. What exactly can you detect? Can you detect the whole traffic? Can you detect each packet as malicious? Each packet — maybe not. Okay, can you detect each flow, each connection, each IP address? What can you detect? It depends on how you count the detections, and then you have the final statistics saying, yeah, we have a false positive rate — or an F-measure — of 99.9999 percent, right? So be very careful when people give you these kinds of results and you say, wow, it's amazing, or no, it's not working. Maybe it is. As we already said: it depends on the dataset, the time frame, and the verification method.

And that's why we are using — and publishing — a very large dataset of malware traffic. You can find it on the stratosphereips.org page. You can go there and download the dataset — lots of labels in there — and you can ask questions, ask for new datasets, whatever, because we need this to be verified, right? Having a malware dataset is very, very difficult, but having a normal dataset is far more difficult. Malware captures — we have a lot. But normal ones: who can hand over their traffic with labels saying, yes, this is normal? So we are doing this very slowly. We go to a computer and check: is this normal? Okay, show me the computer. Yeah, you're not infected, you're not doing something stupid, you're not attacking or whatever. Okay, this is normal traffic. We verify it host by host, and that's why this traffic is so important for us.

Finally, we want to compare approaches: what are other tools doing with this dataset? What are they detecting, and what are they not detecting? This is very, very important for predicting the performance.

So I will stop here. I want to say that network behaviors are very, very important for us, and we think that this type of work — machine learning, artificial intelligence, especially on behavior — is going to give us very good tools in the future. So that's it; that's the page of the project. If you want to go — sorry, the people upstairs, you're not going to be able to read it — stratosphereips.org. And that's it.

So, any questions? Oh, sorry. Yeah, yeah, okay. So the question was: what about traffic like streaming or computer games? And this is a very nice question, because this specific type of traffic can be very, very tricky, right?
For example, we have issues with tunneling protocols, with VPNs, with NAT when you have a thousand computers behind one NAT — or even with DNS, just the simple model of DNS. Imagine this: you are on a computer generating DNS traffic, normally — because you are normal, I hope, people doing normal stuff. And then you get infected, and the DNS traffic is mixed: your traffic and the malware traffic are mixing in one connection. So you have two different behaviors generating similar packets, and they are very difficult to distinguish.

So far, all the gaming and the streaming we saw, we can differentiate. Our worst enemy, I would say, is online music — online radios. These kinds of websites generate a periodicity that is very hard to distinguish from some malware. So we have to be very careful with these models; that's why we train them, and for each model we can set thresholds: okay, this model is very good, this model is not so good. So when we use it, we know where to draw the line — don't detect with this model so much, because it's matching a lot of false positives, for example, right? And the other part of the answer is that in the future we are going to take all the behaviors of your computer together and make a decision based on all of them. So I don't only care whether you are doing something malicious; what I want to know is: are you also doing something normal? Are you doing something like command and control? What is the behavior of your whole computer at the same time? That's better for differentiating these very weird protocols. But they are tricky, it's true. Another thing related to this: when malware starts mimicking the normal behavior of people, that's when it gets very difficult to detect, right?

The servers? No, not so far. I have to say — okay, the question is in this sense: people don't care so much about being attacked, because you have a lot of ways of stopping that. The issue is that you cannot detect when you were attacked successfully — when the attack succeeded, your information is being sent out, and the malware is communicating. That's what we want to detect: when the attack was successful and nobody is detecting it — not the antivirus, not the firewall, not the anomaly detection — and you don't know what's going on. That's the spot where we want to say: okay, we can detect that, right? The rest of the attacks we leave to the firewall and the administrator.

Yes — another question? There. Can you see now? No? The people who couldn't read the slides? No? No questions? Okay. Thank you very much, and enjoy the rest of the conference. Thank you.
The network patterns of targeted attacks are very different from those of usual malware because of the different goals of the attackers. Therefore, it is difficult to detect targeted attacks by looking for DNS anomalies, DGA traffic or HTTP patterns. However, our analysis of targeted attacks reveals novel patterns in their network communication. These patterns were incorporated into our Stratosphere IPS in order to model, identify and detect the traffic of targeted attacks. With this knowledge it is possible to alert on attacks in the network within a short time, independently of the malware used. The Stratosphere project analyzes the inherent patterns of malware actions in the network using machine learning. It uses Markov chain algorithms to find patterns that are independent of static features. These patterns are used to build behavioral models of malware actions that are later used to detect similar traffic in the network. The tool and datasets are freely published.
10.5446/18849 (DOI)
Welcome. Hi, my name is Omar. Thank you for coming out to listen to my talk. Today I'm going to present offensive network security research related to nation-state attacks targeting telecommunication networks. Briefly, what we are going to cover is an introduction to the telco network architecture and the network protocols that are being highly targeted, such as the GRX network architecture and the SS7 protocol. The main concern of this talk is one of the most credited government implants, the Regin malware. Before delving into its capabilities, I will briefly recap rootkit techniques, as Regin is a multi-component, long-term intelligence-gathering rootkit. Afterwards we will browse through Regin's capabilities and then analyze how it could be weaponized in offensive GSM network hacking. As more technically complex implants have recently been discovered by researchers, we will also briefly make a comparison. And finally I will present a demo to show you how some of the techniques employed by the Regin implant can be re-implemented with high-level tooling such as the Windows Driver Development Kit and WinAPI programming in C++.

Briefly about myself: my academic background is computer science. I am currently working in the red team of KPN Telecom, which is also known as Royal Dutch Telecom. I used to work for companies like Verizon, IBM, IES. I perform security assessments in my day-to-day work, and I am very interested in malware analysis and rootkit techniques. And this is the red team — we are based in Amsterdam, and it is only six minutes from Amsterdam's most popular district, the Red Light District.

What inspired us to carry out this research was to analyze and determine the attack surfaces of GSM and inter-GSM networks. Governments are not only hacking their own citizens but spying on each other through covert hacking operations with tools like Regin and other stealth malware. Surveillance programs have reached a crazy level. Recent leaks that hit the media confirm that network devices and telecom networks are either victims of these programs or direct contributors to them. Once the Regin hacking campaign was revealed, pretty much each and every telecommunication company got paranoid and tried to make sure they hadn't been affected by the same attack. And rootkits are really fun: they require you to learn a lot about the OS internals, kernel working principles and computer architecture. I am sure that not only understanding an incident, but also being able to reproduce and simulate the attack, means a lot to those whose day-to-day work is to break systems.

The GSM network architecture is very complex; however, let us try to break it down into the following core elements. GSM stands for Global System for Mobile communication, a network developed for digital mobile radio communication — wireless voice and mobile communications. GPRS is an extension of the GSM network that provides mobile wireless data communication. From a security perspective, the important GSM network elements are the mobile station, the base transceiver station, the base station controller, the base station subsystem, the mobile switching center, the authentication center, the home location register and the visitor location register.

For a researcher, the component that is most highly targeted is the mobile switching center: the digital ISDN switch that sets up connections to other mobile switching centers and to base station controllers.
Mobile switching centers form the wired backbone of the GSM network and can switch calls to the public switched telephone network. The equipment identity register is a database that stores international mobile equipment identities, known as IMEI numbers, for all mobile stations within the network. An IMEI is an equipment identity assigned by the manufacturer of the mobile station, and the equipment identity register provides security features such as blocking calls from handsets that have been stolen, for example. The HLR, the home location register, is the central database of all users registered to the GSM network. It stores static information about subscribers, such as the international mobile subscriber identity (IMSI), the subscribed services and a key for authenticating the subscriber. The HLR also stores dynamic subscriber information, for instance the current physical location of the mobile subscriber. Associated with the HLR is the authentication center; this database contains the algorithms for authenticating subscribers and the necessary keys for encryption to safeguard user authentication. And lastly, the VLR, the visitor location register, is a distributed database that temporarily stores information about the mobile stations that are active in the geographic area for which that VLR is responsible. A VLR is associated with a mobile switching center in the network; when a new subscriber roams into a location area, the VLR is responsible for copying the subscriber information from the HLR to its local database.

GSM uses various interfaces for communicating among network elements. Communication also occurs over interfaces to the management databases — the VLR, HLR, authentication center and equipment identity register. Communication might traverse multiple MSCs but ultimately must reach the gateways. Separate interfaces exist between each pair of elements, and each interface requires its own signaling protocol.

The network switching subsystem is the heart of the GSM system. It connects the wireless network to the standard wired network. It is responsible for handing off calls from one base station system to another, and for services such as charging, accounting and roaming. Different signaling protocols are used on the various interfaces, and some interfaces carry only control signaling, with no user traffic. For example, no user traffic is generated on the interfaces between the HLR and the VLR; these interfaces carry only MAP (Mobile Application Part) signaling over the SS7 protocol.

How is this related to our research? The Regin malware specifically targeted GSM networks. The antivirus companies say Regin was developed as a low-key type of malware that can be used in espionage campaigns lasting several years, active since 2008. They were only able to analyze a handful of decrypted sample files; the remaining actions are particularly difficult to decipher. The Regin-related stealth hacking campaigns are also confirmed by the recent leaks that hit the media. The following picture was taken in Germany: a group of activists protesting against GCHQ and the NSA to get their data removed from their databases.

Until now, no one has had a real, clear idea of how GSM operators and other institutions were targeted and hacked. We tried to determine possible attack scenarios and attack surfaces. Our approach looks similar to the old-school technique of North Korea: self-reliance — build what you need on your own.
In order to determine potential attack scenarios, we decided to perform a large-scale service enumeration from base stations. For this, we passively tapped GSM communication from radio base stations. We greatly utilized Michael Ossmann's passive network tapping utility in our research, as seen in the picture. We tried to collect as much information as possible from different endpoints of the 2G, 3G and LTE communication, including the management services that were reachable from the base stations and the network switches.

So, what we discovered during the research from the BTS: the absence of a physical intrusion detection system — I'm specifically talking about signaling intrusion, whether wiretapping happens or not. We also discovered that devices can be altered or replaced; most GSM companies don't even take into consideration that somebody might infiltrate a base station. We discovered vulnerable services accessible from the BTS, including management interfaces with default passwords, public and private keys, and an absence of tamper resistance and unauthorized-access protection. Well, the network tapping shouldn't have been possible in the first place: there was a big segmentation issue, and internal, non-routable segments were reachable from the BTS. Our conclusion and experiments revealed that it was possible to exploit network subsystems in the core GPRS unit from the BTS. Since base stations are among the most exposed GSM network components, we wanted to see whether it is possible to attack the other components from there — the ones that store the juicy information, such as the authentication center, the HLR and the VLR. If you ever perform a similar assessment, you will be surprised where you can get into a GSM network from the radio stations, especially if segmentation is not correctly implemented. That's what we experienced.

Let's take a look at network components that could be targeted remotely. GRX is the GPRS Roaming Exchange. It acts as a hub for GPRS connections from roaming users, removing the need for dedicated links between each pair of GPRS service providers. It's a network consisting of peering interconnects; the main GRX gateways are located in Amsterdam for Europe and in Singapore for Asia. Essentially, when you travel abroad, your phone communicates with your provider at home through this infrastructure. The GPRS Roaming Exchange interconnects networks — your local GSM provider abroad — and it is trusted, highly interconnected, and made for Internet sharing. A failure or malicious activity there would affect multiple users and multiple networks, and multiple attack vectors are available, not limited to the particular segment you are originating from.

The GPRS Tunneling Protocol (GTP) is a group of IP-based communication protocols used to carry general packet radio service traffic within GSM, UMTS and LTE networks. GTP can be decomposed into separate protocols: GTP-C, GTP-U and GTP' (GTP prime). GTP-C is used within the GPRS core network for signaling between GPRS support nodes. GTP-U is used for carrying user data within the GPRS core network, between the radio network and the core network. GTP' uses the same message structure as GTP-C and GTP-U but has an independent function. GTP can be used with UDP or TCP; UDP is either recommended or mandatory, depending on the variant. One of the most important features of GTP tunnels is that DNS on the GRX is used for resolving APNs in order to set up a GTP tunnel. An Access Point Name is the name of a gateway between a GPRS, 3G or 4G mobile network and another computer network — frequently, access to the public Internet. (A minimal parser for the fixed GTPv1 header is sketched below.)
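To make the wire format tangible, here is a small Python sketch that unpacks the fixed 8-byte GTPv1 header as defined in 3GPP TS 29.060. The sample bytes are made up for illustration; a real analysis would feed it the UDP payload of captured GTP packets.

```python
import struct

def parse_gtpv1_header(data):
    """Parse the fixed 8-byte GTPv1 header: flags, message type, length, and
    the tunnel endpoint identifier (TEID) that names the tunnel on the wire."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", data[:8])
    return {
        "version": flags >> 5,              # 1 for GTPv1
        "protocol_type": (flags >> 4) & 1,  # 1 = GTP, 0 = GTP'
        "message_type": msg_type,  # e.g. 0x10 Create PDP Context Request,
                                   #      0xFF G-PDU (tunnelled user data)
        "length": length,
        "teid": teid,
    }

# A made-up header: version 1, GTP, G-PDU, length 0, TEID 0xDEADBEEF
print(parse_gtpv1_header(bytes([0x30, 0xFF, 0x00, 0x00,
                                0xDE, 0xAD, 0xBE, 0xEF])))
```

The signaling messages (GTP-C) carried behind this header are where the IMSI and APN fields discussed next appear.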
In the following network capture, a standard GTP packet transmits a lot of juicy information, such as the IMSI, the subscriber network and the tunnel endpoint. This can also be useful to correlate a person and his or her activities with the rest of the world, if you have enough information. The TID is a field present in the GTP-C header — the tunnel identifier; the FTEID is the fully qualified tunnel endpoint identifier. If you are not familiar with all these GRX network and protocol details, you can picture it like this: your carrier writes these settings onto your cell phone once you set it up, and they determine the correct IP to connect to the secure gateway — like you would need for a private network, a VPN.

According to the former head of the National Security Agency, Michael Hayden, metadata — the information collected by the NSA about phone calls and other communication that doesn't include the actual content — can tell the government everything about anyone it is targeting for surveillance, making collection of the actual content unnecessary. And advances in machine learning and artificial intelligence make it possible to predict potential human behavior if enough data is provided.

What is SS7? It is the common channel signaling system that transports signaling messages over an SS7 network, developed in the 1970s — and that original design didn't introduce any security features. Then there is SIGTRAN: a set of protocol extensions to SS7, defined to transport SS7 messages over IP networks. SS7 provides procedures for user identification, routing, billing and call management. SS7 consists of several parts: the Message Transfer Parts (MTP1, MTP2, MTP3) for physical signaling and connection control, plus the Signaling Connection Control Part, the Transaction Capabilities Application Part, the Telephone User Part and the ISDN User Part. SS7 features include flow control of transmitted information, traffic congestion control, peer identity and status detection, and traffic monitoring and measurement. SS7 sets global signaling standards that define the procedures by which network elements within the public switched telephone network exchange control information over digital links, for setting up, managing and tearing down wireless calls. Since SS7 is not application specific and works over IP, it enables multiple network elements to work together.

There are various tools available for SS7 experimentation; one of them is the SS7 analysis tool that I used during my research. As part of the Regin malware attack-surface analysis, we performed a network traffic analysis of the SS7 protocol. The experiment revealed that it was possible to extract many — so to say — juicy pieces of information, such as the caller number, the callee number, and information related to the call itself: call duration, call start and end times, and call status. Please remember that this is exactly the kind of information — so-called metadata — for which people are being killed.

Let's browse through some of the attack scenarios over the SS7 protocol. When a subscriber registers on a switch, the subscriber profile is copied from the HLR to the VLR database. Assuming an attacker manages to make changes in the VLR database, he can change parameters and fake subscriber info, so that the victim is redirected into a conference call each time a call is made. The attacker can then simply record and listen to the call passively, while the caller assumes he is directly communicating with the callee.
By introducing a decoy VLR unit into the SS7 network, an attacker can intercept SMS messages and still send a confirmation to the sender that the message was received. If the victim uses mobile banking, or another service that relies on one-time SMS passwords, then the attacker can recover or steal these passwords to make money transfers or take over Internet accounts. Furthermore, the following attack scenarios are also possible by manipulating VLR and HLR units within the SS7 network: intercepting SMS messages, intercepting outgoing calls, redirecting incoming or outgoing calls, and making changes to user bills and balances.

Recently, researchers discovered that it is possible to unblock cell phones by exploiting a trust relationship in the equipment identity register access: the equipment identity register simply checks whether the controller unit returns zero in the MAP CHECK_IMEI structure and then treats the phone as if whitelisted. You can read more about this attack scenario, as the research was released to the public and is available on the Internet. An interesting leaked email revealed that Hacking Team, which is known for selling weaponized surveillance tools to oppressive governments, is also interested in exploiting SS7 for user location tracking. It is technically true, as seen in the circled part, that the location of a mobile phone can be obtained at the time of a call.

Rootkits can be analyzed in two categories: kernel and user-land rootkits. When we say user-land or ring-3 rootkits, we are referring to executables that require the least system privileges. Kernel rootkits — in other words, ring-0 rootkits — run at the highly privileged level of the operating system; these are also known as device drivers. Hooking is a technique for hijacking a function or a system call and changing the execution flow, in a way that lets the attacker modify, change or observe what is being sent. You can think of a pirate hijacking a ship: they steal the treasure and then let the ship continue to its destination if they wish. (A toy illustration of the hook concept, in code, follows at the end of this passage.) Hooking techniques are used by malicious applications to monitor user actions or application behavior, as well as by legitimate applications for the same purpose. For instance, antivirus applications use hooking techniques to detect malicious behavior in the system, such as keylogging and backdoors. The same applies to malicious applications and bots, which use these techniques for malicious actions or for hiding their activities on the infected system. The most commonly known techniques for user-land applications are import address table (IAT) hooking, DLL injection and inline hooking. More advanced kernel rootkits use System Service Descriptor Table (SSDT) hooking, I/O request packet (IRP) hooking, Interrupt Descriptor Table (IDT) and Global Descriptor Table (GDT) hooking, and lastly SYSENTER hooking. These techniques are widely documented and available on the Internet, so we will not delve into them here; I advise you to look them up and read about them.

Regin is a very complex multi-platform rootkit consisting of a dropper module and multi-stage user- and kernel-level components. Each level seems responsible for decrypting and loading the modules of the next stage into memory and executing them. The most interesting feature of the Regin platform was its use of an orchestrator, which was very new to researchers at the date of discovery. It can be thought of as RPC calls to specific kernel drivers to enable and activate them.
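Since hooking is central to everything that follows, here is a deliberately harmless, language-agnostic illustration of the concept in plain Python — the "pirate" saves the real function, inspects the call, and forwards it. Real user-land rootkits do the analogous thing with IAT entries or inline trampolines, and kernel rootkits patch SSDT or IDT entries; this sketch only shows the shape of the idea, nothing Regin-specific.

```python
import builtins

_original_open = builtins.open          # save the real function ("the ship")

def hooked_open(path, *args, **kwargs):
    print(f"[hook] someone is opening {path!r}")   # spy on the call
    return _original_open(path, *args, **kwargs)   # forward it unchanged

builtins.open = hooked_open             # install the hook

# Every subsequent open() in this interpreter now passes through the hook;
# a malicious hook could instead hide files or tamper with the data returned.
```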
Then there are the payloads, which are utilized for actually performing the malicious actions in the system. One of the most interesting malicious capabilities employed by the Regin rootkit was the ability to monitor GSM network base station commands, as you may have read in the antivirus reports.

So what were the challenges and hurdles of the research? No one had the dropper module at the start of the analysis. It was a multi-stage, encrypted, very complex modular framework. Modules are invoked via RPC calls by the framework. Malware data is stored inside a virtual file system, and the encryption type was RC5 — not commonly seen in malware implementations up to that time. And our research GSM network had no indication of compromise. So we came up with a way to approach the problem: the best way to start was to reverse engineer the encryption brain of the framework, the orchestrator, accompany that with memory dumps of infected systems in collaboration with different researchers, and instrument the calls to do dynamic analysis that way. Similar research has been done by a Russian researcher, with a more detailed explanation at the given link.

If we delve into the Regin framework stages: stages 1, 2 and 3 of the Regin platform were solely responsible for decryption of the succeeding stages. Stage 1 simply prepares the execution of stage 2, which is developed as a kernel module. In the snippet, stage 1 simply uses memory and kernel calls to allocate a memory pool for stage 2 — which makes sense, since the succeeding module is a kernel driver. Stage 2 has a configuration block containing the names of two system directories that hold the encrypted next stage in their extended attributes. If stage 3 sends a signal, stage 2 can re-initiate the start of the Regin code, which makes detection much harder. The second stage also creates a marker file that can be used to identify an infected machine, and it can delete the previous stages; the kernel payloads are stored in an encrypted file container — a virtual file system. Stage 3 is the kernel driver manager, and stage 4 is the brain of the Regin framework: the orchestrator, responsible for loading the kernel modules.

KeStackAttachProcess is an interesting kernel API call that we observed while analyzing the Regin platform: a routine that attaches the current thread to the address space of a target process. Since our goal was to reproduce and simulate the Regin functionalities and instrument payloads, I created a simple routine that attaches to an application, intercepts system calls, and changes them in the way we wish. This is a very simple code snippet showing how to obtain the process environment block (PEB) of a process, then obtain the base addresses of its modules, and attach a thread to it. (A small user-mode sketch of the PEB-lookup step follows below.)

This may not be a very concrete comparison among Uroburos, Regin and Duqu 2, but Duqu 2 seems to be the most complex rootkit discovered until now. The modules, and the approach to operating malicious actions on the targeted system, have changed over time: for instance, Uroburos back in the day bypassed PatchGuard, while Regin and Duqu 2 were using stolen, legitimate certificates. In order to simulate the Regin behavior, I created a small framework consisting of two modules, a kernel-level and a user-level module, and it simulates the orchestrator behavior in a way similar to how it is implemented in the Regin framework.
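As an aside, here is a user-mode Python/ctypes sketch (Windows only) of the first step just described: locating a process's PEB, from which the loaded-module list — and thus the base addresses one would hook — can be walked. This mirrors the idea in user mode; it is not Regin's kernel-mode code, which uses KeStackAttachProcess instead.

```python
import ctypes
from ctypes import wintypes

class PROCESS_BASIC_INFORMATION(ctypes.Structure):
    # Layout as documented for NtQueryInformationProcess(ProcessBasicInformation)
    _fields_ = [("Reserved1",       ctypes.c_void_p),
                ("PebBaseAddress",  ctypes.c_void_p),
                ("Reserved2",       ctypes.c_void_p * 2),
                ("UniqueProcessId", ctypes.c_void_p),
                ("Reserved3",       ctypes.c_void_p)]

def peb_address(pid):
    """Return the PEB base address of the target process."""
    PROCESS_QUERY_INFORMATION = 0x0400
    kernel32 = ctypes.windll.kernel32
    kernel32.OpenProcess.restype = wintypes.HANDLE   # avoid handle truncation
    h = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    pbi = PROCESS_BASIC_INFORMATION()
    ret = wintypes.ULONG()
    # 0 = ProcessBasicInformation
    ctypes.windll.ntdll.NtQueryInformationProcess(
        h, 0, ctypes.byref(pbi), ctypes.sizeof(pbi), ctypes.byref(ret))
    kernel32.CloseHandle(h)
    return pbi.PebBaseAddress

# print(hex(peb_address(some_pid)))  # from here one would read PEB->Ldr
```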
The features of the simulator include covert data exfiltration; running its threads inside the address space of a legitimate application, making it totally invisible to users; an orchestrator simulator with partial RPC calls, the same as in the Regin framework; monitoring of file system, registry and network calls; and hooking, injection and keylogger modules — pretty much the building blocks you can see in ransomware, crypto-lockers and so on.

So, time for the demo. I hope it won't fail. It's a Windows 7 box. I implemented it for both 64-bit and 32-bit; I'm going to show you the 32-bit one. Everything is written in C/C++ with the Windows Driver Development Kit. I have two modules, which are the executables here — the user-land part and the kernel driver — and a batch file that simply invokes them. I will show the content of the batch file: it invokes the executable with a hard-coded path, calls the service controller to register and start the kernel driver as a file system driver, and then executes the user-land part of the framework. Since it is not weaponized with any zero-day or packed inside another executable — it's a simple executable, just for demonstration purposes — I made it very simple. What I'm going to do is copy these three files to the Windows directory. So, it's a normal user; I run it as a privileged user and then — yes. This is the content of the infector; I will simply run it. And right now the system is infected, just like with the Regin malware.

I will show you: since the malware is implemented with simple SSDT hooks right now, you can clearly see these kernel-level hooks that intercept and change behavior in the system. And this is the unprivileged user. I have a client, so I can connect remotely — depending on the firewall configuration I could change the port as well, but for the sake of simplicity I hard-coded the port number. Let me show the capabilities of the client here: it connects to the infected machine by simply providing the IP address. So it connects, it gives a shell, and you can run some malicious activities on the system. By the way, this machine runs a fully updated antivirus, and it cannot detect the execution, because the rootkit intercepts certain calls and hides itself from the antivirus.

I want to demonstrate some of the simple client commands I can send to the system. I can run an executable on the system, invoke some file and open it; I can encrypt the entire disk; and I can kill the system by simply writing changes into the file structure of the disk. For example, let me demonstrate — like this. I think I don't have multiple... maybe I can demonstrate something malicious. I have successfully overwritten the disk, and when I try to restart the system — it shouldn't be able to find the operating system. Yeah, there it is — the boot failure, if you can see. I think that's it. Do we have any questions? All right. Thank you very much. Thank you.
Recent research in malware analysis suggests state actors allegedly use cyber espionage campaigns against GSM networks. Analysis of state-sponsored malware such as Flame, Duqu, Uroburos and Regin revealed that these were designed to sustain long-term intelligence-gathering operations by remaining under the radar. Antivirus companies did a great job in revealing technical details of the attack campaigns; however, they have almost exclusively focused on the executables or the memory dump of the infected systems — the research hasn't been simulated in a real environment. In this talk, we are going to break down the Regin framework stages from a reverse engineering perspective — kernel driver infection scheme, virtual file system and its encryption scheme, kernel mode manager — while analyzing its behaviors on a GSM network and making a technical comparison with its counterparts, such as TDL4, Uroburos and Duqu 2.
10.5446/18848 (DOI)
Welcome, ladies and gentlemen. My name is Norbert Ianni. I worked for the Hungarian NSA for more than seven years, where I was responsible for penetration testing and threat analysis, and we did some research relating to prime number theory. Right now I'm a senior security advisor at a Hungarian company. As you can see from my t-shirt — this is a green t-shirt — we didn't have such cool t-shirts at the Hungarian NSA, so maybe that was the first reason I left the company.

In this presentation I would like to show you how to hide backdoors in random number generation methods, how to identify such malicious code, and then how to apply these backdoors in practice to prime number generation methods and to backdoored random number generators. The general background of the research is shown on these slides; the main concept of this presentation is based on the research you can see there, on cryptography and pseudo-random generation, which was also presented at the Central European Conference on Cryptology in Budapest. We went through the mathematical background, then we implemented the code, and right now we would like to show you how to use it in real practice — how to make a fake certificate and how to implement sophisticated backdoors in random number generators. This research is also based on work on fault-injection-based backdoors in pseudo-random number generators. The research started under the umbrella of the Hungarian NSA, and by now we have implemented all of the code; that is the main reason we would like to show you how you can do it. After the presentation you can also get the source code of these algorithms and implementations.

As you know, there are numerous open source applications that use the OpenSSL library — for example OpenSSH, curl, stunnel and so on. It's very important to emphasize that these applications rely on the security of OpenSSL. So if you can plant a backdoor in this library, or introduce new techniques into the OpenSSL library, then there is a chance to hide backdoors in all of them. Of course it's not that easy, because, as you know, there are checksums on the library and there are many, many defenses against this kind of malicious activity.

So the main question, as I will try to show in my presentation, is: is it possible to hide a backdoor in a well-known algorithm? Everybody knows that RSA or AES is well known to security experts. If you do a white-box review of, for example, the AES code — if you know how it works, if you know what the algorithm is doing, and you have some test vectors — it's very easy to identify backdoors in an encryption algorithm. As you can see on the slide: you have an encryption key, you have an initialization vector, you have a test vector, and you have a ciphertext, and if you know how the AES algorithm works you can make some tests. There is an input, there is an output — it's working or not. If the output is different from the sample ciphertext, then either there is a backdoor in the AES implementation or there is a programming mistake in the algorithm. (Below is what such a known-answer check looks like in a few lines of code.)
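The "white box" check just described fits in a few lines. This sketch runs an implementation against a published known-answer test — the AES-128 example from FIPS-197 Appendix C. If the output differs, the code is either buggy or backdoored. It assumes the pycryptodome library for the AES primitive.

```python
from Crypto.Cipher import AES

# Published FIPS-197 Appendix C.1 known-answer test for AES-128
key       = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
plaintext = bytes.fromhex("00112233445566778899aabbccddeeff")
expected  = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

# Any deviation from the published ciphertext exposes a modification.
assert AES.new(key, AES.MODE_ECB).encrypt(plaintext) == expected
print("AES implementation matches the published test vector")
```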
So such test vectors are proofs that a modification has been made to the algorithm, and it's very difficult to hide a backdoor in AES, RSA or any other well-known encryption algorithm. Hence the main idea: modifying an encryption algorithm leads to a non-sophisticated backdoor.

So what about random number generators? There are no test vectors — the expected behavior of a random number generator is to produce true random numbers. (Of course, if you are using only software, not hardware random number generators, you can only try to produce very plausible random numbers.) You therefore have to use statistical tests, like the National Institute of Standards and Technology (NIST) tests or the diehard tests, in order to test the output. Now, if you would like to modify the generator: there are no test vectors, as I mentioned. With AES there are test vectors — it works or it doesn't — but with random number generation methods, because there are no test vectors, you cannot identify malicious code in the generator that way. Just imagine an embedded system with hardware random number generation where you don't know the source code; you know only the output of the random number generator. If this random number generator passes all statistical tests, like the NIST tests or the diehard tests, it appears to be a good random number generator, and you can do nothing more with it.

So, as you can see, the main idea is to modify a pseudo-random number generator — for example in the OpenSSL library — in such a way that the modified output still passes all statistical tests. If I'm modifying the random number generator, it should still pass all statistical tests; if not, of course, it is visibly not cryptographically secure.

That was the main idea, and we tried to build the mathematical background for this topic and then implement it in practice. The first part has been presented at some conferences, and there is a theorem about how this kind of number theory can be used to hide backdoors in a pseudo-random number generator such that, afterwards, you cannot identify the backdoor in the generator. As you can see on the slide: let $B_i$ denote an $i$-bit binary sequence, and define the prediction function $\varphi(B_i)$ as shown. The important theorem here is that there exists a polynomial-time pseudo-random number generator $P$ with period $2^m$ and $\varphi(P, B_i) = 0$, meaning that by observing only $3m - 1$ consecutive bits from the output of the generator, it is possible to predict the next bit with 100% success. That's the key point: the next bit, with 100% success.

And it's very easy to create such an algorithm. On the slide there is a proof-of-concept algorithm related to AES — I call it the self-encrypting AES. As you can see, there is a seed and an AES encryption; it's a combination of the counter mode and the CBC mode: the output is fed back and becomes part of the input for the next block.
I will call these blocks capital $C$, because there is also a lowercase $c_i$ — the counter blocks. You obtain $C_{i+1}$ by an AES encryption keyed with $C_i$, applied to the next counter block: roughly, $C_{i+1} = \mathrm{AES}_{C_i}(c_i)$. Because of the counter there is no short loop in the construction, and it can be proved that the cycle length of the self-encrypting AES is on the order of $2^{128}$. And, as you know, AES is designed in such a way that it passes all the well-known statistical tests, like the NIST tests or the diehard tests. So if you post-process a pseudo-random number generator's output in this way, the output is always predictable to whoever knows the design, yet it cannot be distinguished from true random numbers by statistical tests alone. If you know the source code — if you know how the algorithm works — you can predict the next bit with 100% success; with statistical tests only, I have to emphasize, you cannot distinguish it from true randomness.

There is also a very interesting corollary about this proof-of-concept algorithm. If you ask how many output bits you have to observe in order to reconstruct the internal state of the generator, the answer is only 383 bits — not much for reconstructing the next bit. The proof is that a pair of consecutive blocks $C_i$ and $C_{i+1}$ — 256 bits — can always be found, aligned, within any 383 observed bits; there is a picture on the slide where you can check this yourself.

This is only a proof-of-concept code, though. If someone tried to implement it in the OpenSSL library, it would not be a sophisticated backdoor: if a security expert does a white-box review and checks this combination of counter mode and CBC mode around AES, he will definitely identify the malicious activity in this encryption method. (A rough reconstruction of the self-encrypting AES, including the next-block prediction, is sketched below.)
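Here is a rough sketch, under stated assumptions, of the self-encrypting AES idea. It follows one plausible reading of the slide — the next 128-bit output block is AES-encrypted with the previous output block as key, over an increasing counter — so it is my reconstruction of the construction, not the speaker's exact code. It assumes pycryptodome. The point it demonstrates: an observer who knows the design (and the counter) can predict block $i{+}1$ from block $i$ with 100% success, yet no test vector could have exposed the construction.

```python
from Crypto.Cipher import AES

def self_encrypting_aes_stream(seed_16_bytes, nblocks):
    """Yield C_1, C_2, ... where C_{i+1} = AES_{C_i}(counter_i)."""
    c = seed_16_bytes                    # C_0: the secret internal seed
    for counter in range(nblocks):
        c = AES.new(c, AES.MODE_ECB).encrypt(counter.to_bytes(16, "big"))
        yield c

blocks = list(self_encrypting_aes_stream(b"\x00" * 16, 3))

# The "attack": knowing the construction, predict block 2 from block 1 alone.
predicted = AES.new(blocks[1], AES.MODE_ECB).encrypt((2).to_bytes(16, "big"))
assert predicted == blocks[2]
print("next block predicted with certainty")
```

Statistically, the stream inherits AES's excellent properties, which is exactly why black-box statistical testing cannot flag it.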
they can use, for example, these techniques to generate the prime numbers. So after you have generated such prime numbers, you can test them with the NIST tests and other sophisticated statistical tests, and they pass all the well-known statistical tests. It means that no method implemented in the OpenSSL library will identify that any malicious background activity is going on in the code. On this slide you can see a public key: we generated a public key with our modified OpenSSL library, and after that we just published the public key to everybody. As you know, it is a 2048-bit public key, so it is almost infeasible to factorize; however, if such a backdoor is used, if such a malicious random number generator is used, it is possible to reconstruct its prime numbers. There is a typo on this slide, can you see it? This is the public key, and this is the decimal value of the N modulus from that public key: of course this is the Base64-encoded version of the public key, and this is the N modulus as a decimal number. It is a 2048-bit number, and we generated it in the way I mentioned to you, with John Pollard's technique in mind, because we would like to reconstruct the private key from the public key. So I would like to emphasize that after this, it is possible to reconstruct the private key from the public key, and of course if you are using only statistical tests you cannot find any malicious activity in the numbers alone, because no such activity shows there. You can also check Pollard's p-1 algorithm pseudo-code on the internet. As you can see here, the first step is to set a smoothness bound B; I would not like to go into the details of how to select this bound, but after that there is a very important part of the algorithm, this one: the algorithm runs until a time limit is reached. Have you ever tried to factorize a very large number? For example, if the number is more than 2000 bits, you try to factorize it and your computer just keeps working; there is no result. It means you cannot factorize it. However, as I mentioned to you, this number can be factorized: this computer factorized it in approximately eight minutes, and it is possible to reconstruct the P and Q prime numbers. And if I choose the smoothness bound B larger than 100,000, for example 1 million, 10 million, 100 million, then the factorization time is not eight minutes or five minutes but one hour, one day, one week. So if you are trying to factorize a large number and there is no result, you don't know why there is no result in the output: because it is a genuinely secure huge number, or because it was generated in this way with a very large smoothness bound. I really don't want to explain this algorithm in detail; you can also find it on the internet. We just used the combination of this algorithm and the previously mentioned algorithm, self-encryption AES, to produce sophisticated and very good quality random numbers in the OpenSSL library.
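To illustrate both sides of what was just described, here is a toy version in Python: generating a prime whose p-1 is smooth, then recovering it with Pollard's p-1 method. It assumes the sympy package, uses small 256-bit primes for speed, and is a sketch of the idea, not the code of the modified OpenSSL library.

```python
# Toy backdoored prime generation + Pollard p-1 recovery.
# Requires: pip install sympy
import math
import random
from sympy import isprime, primerange

B = 100_000                         # smoothness bound from the talk
SMALL_PRIMES = list(primerange(2, B))

def backdoored_prime(bits: int) -> int:
    """Find a prime p of roughly `bits` bits where p-1 is B-smooth."""
    while True:
        m = 2                                   # keep p-1 even
        while m.bit_length() < bits:
            m *= random.choice(SMALL_PRIMES)    # product of small primes
        if isprime(m + 1):
            return m + 1                        # p-1 = m is B-smooth

def pollard_p_minus_1(n: int, bound: int = B) -> int:
    """Pollard's p-1: a nontrivial factor of n, or 1 on failure."""
    a = 2
    for q in primerange(2, bound):
        qk = q                       # largest power of q below the bound
        while qk * q <= bound:
            qk *= q
        a = pow(a, qk, n)
        g = math.gcd(a - 1, n)
        if 1 < g < n:
            return g
    return 1

p = backdoored_prime(256)
q = backdoored_prime(256)
n = p * q                            # a toy "public modulus"
f = pollard_p_minus_1(n)
print(f in (p, q))                   # True: the factor falls out quickly
```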
Of course, relating to that huge decimal number: within one minute a Core i7 laptop found all of the prime factors of the huge number, so you can find P and Q in less than a minute. It means it is possible to reconstruct the private key from the public key, and of course everybody knows this malicious activity can be used in embedded systems, on web pages, or when you are using SSH or something like that. Just some words relating to the randomness tests: when you are modifying the OpenSSL output, there are very well-known statistical tests that your generated numbers have to pass. It means that if I just generate some bad numbers and they do not pass all the statistical tests from the National Institute of Standards and Technology, these numbers fail and you cannot use them for cryptographic purposes; that is very well known. However, we ran all of the 15 tests (there are also some extended tests in the NIST suite), and the output of the random number generation passed all of them, and after that we tried some further measures. We tried different approaches as well; maybe you have heard about Sárközy's theoretical measures of pseudorandomness. We also applied the well-distribution measure, this one, and the correlation measure, this one, and we did not identify any differences between true random numbers and the generated random numbers. So we did not find any statistical test that can distinguish these bad numbers from good numbers. So after the statistical tests we tried to implement a real scenario and make a real deployment of the code. You can see vkssl.com: we registered this domain, generated a CSR, and sent it to Comodo, saying that we would like an HTTPS certificate, you know, we would like to use secure communication. We requested a certificate, and the certificate was issued by Comodo Secure CA; it is 2048-bit RSA, as you know. And it is very interesting: a CA cannot identify any malicious activity in such numbers, because they do not have the resources to run factorizations looking for bad behaviour in such numbers. So they approved and issued the certificate, and we deployed it to vkssl.com. You can check on your mobile phone as well: vkssl.com is active and working. Of course you can run some SSL server tests; maybe you know them. We tried to configure the server very securely, meaning an A+ rating; everything is enabled on the server, so it looks very secure: TLS with strong cipher suites is enabled, and HTTPS forward secrecy is also enabled on the server. So regarding the security certificate, everything is turned on, and the server seems very secure, certified by Comodo.
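To give a feel for what one of those statistical tests actually computes, here is the simplest test of the NIST SP 800-22 suite, the frequency (monobit) test, as a short Python sketch; a sequence passes when the p-value stays above 0.01, which is exactly why AES-shaped but backdoored output sails through.

```python
# NIST SP 800-22 frequency (monobit) test: checks only the balance of
# ones and zeroes in the sequence. Standard library only.
import math

def monobit_test(bits: str) -> float:
    """Return the p-value of the NIST frequency (monobit) test."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)   # +1 for 1, -1 for 0
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A perfectly alternating sequence passes with p-value 1.0: this one
# test says nothing about predictability, only about the bit balance.
print(monobit_test("10" * 4000))
```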
As you can see on these slides, using some very basic commands on Linux, for example openssl s_client -connect, you can just download the certificate from the server; it is very easy, you can try it yourself. After that you can print out the modulus from the certificate, which is the public key modulus, this huge number, and the next step is to try to factorize this number. We wrote the source code in PARI/GP; it is a computer algebra system, very easy to use, you can find every function relating to number theory in it, you can very easily write programs on this topic, and you can find some very cute stuff in this program, as you can see in the presentation. On this slide, here is the modulus, this large 2048-bit number, and you have to factorize it. Of course, if you try to factorize a normal number of this size, it takes more than a million years on a single computer, so you cannot do it. However, this number was generated by our sophisticated modified OpenSSL library. So here is the Pollard p-1 algorithm, invented by John Pollard in 1974; we wrote this program in PARI/GP, you can see it is a very short one, and we just tried to factorize the huge number, this one. As you can see, it took one minute and 31 seconds; within two minutes we were able to factorize the huge number from the certificate. It means it is possible to reconstruct the private key from the public key. So even if you see, for example on vkssl.com, that everything is secure, the certificate is certified by a very well-known CA, and everything is configured well, there may still be a problem with the certificate, because such bad prime numbers can be used in such certificates. Summarizing the research, it is very important to emphasize kleptography: this is the so-called theory of stealing information securely and subliminally with cryptographic methods, for example stealing information from companies by cryptographic means; it is very well known nowadays. We just modified the OpenSSL library, but you can use these kinds of methods in embedded systems as well, and it is very important to know that when you are using proper kleptographic methods, even with a white-box audit a security expert will not identify any malicious activity in the source code. Of course, if you know that this kind of prime number generation exists and you see the source code, you will know that there is some problem with the prime numbers; however, if you know only the outputs, for example the certificate like on vkssl.com, you have no clue how to factorize it, you have no clue how the prime numbers P and Q were generated. So you can implement backdoors in such applications, like SSH or certificates. This is working malware on Linux: you can find the binaries of OpenSSL, and it is possible to modify the binaries so that afterwards the prime number generation is malicious. And of course, as I mentioned to you, it is very important to emphasize that it is not possible to distinguish this by statistical tests. Many random number generators, hardware or software, try to prove their quality by analyzing the output, and if you are only analyzing the output, it is not possible to distinguish this from other true random number generators.
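The certificate-download step just described can also be done without the openssl one-liner; here is a sketch in Python, assuming the cryptography package and that the demo host is still reachable (any HTTPS host works the same way).

```python
# Fetch a server certificate and read out the RSA public modulus.
# Requires: pip install cryptography
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("vkssl.com", 443))   # download cert
cert = x509.load_pem_x509_certificate(pem.encode())
n = cert.public_key().public_numbers().n               # RSA modulus
print(n.bit_length())      # 2048
print(n)                   # the huge decimal number to factorize
```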
Okay, thank you for your attention. If you have any question relating to the random number generation or to the source code, I am ready to answer it, or we can also send it to you in an email so you can try it, or you can also try it at softwareansweekhssh.com or delizen.shh, this is also weak, and weakhssh.com. Thank you very much. Does anybody have any questions they would like to ask? We have a few minutes there. Come on. Come on. You must all be very clear then. Nothing? Ah, there is one gentleman. Sure, question. Have you studied the topic of manipulating ECC generation through this kind of thing? Yeah, you mean some problems relating to, you know, elliptic curve generation methods. Yes, as you know, relating to cryptography, many huge organizations, like the NSA or Hewlett-Packard or similar very big companies, are using such kinds of backdoors, and it is very interesting, because when you identify a backdoor relating to kleptography, you don't know whether it is really a backdoor or just some mistake in the mathematics or the programming. When I checked the elliptic curve case, they were using, not the same techniques, but some kleptographic techniques for how to hide some secret parameters relating to the elliptic curve. And after that, how can I say it: you know, when we generate prime numbers we have probabilistic primality tests, which means there is a chance the number is not a prime, but it is very highly likely a prime, like with the Miller-Rabin test or something like that. And when they generated some elliptic curves, they just told everybody that it is 100% secure, that there is only a negligible chance that something is not on the curve, and so on; and then, some years ago, it was identified that there is some problem with the curves. It is the same technique applied to other number-theoretic groups, like elliptic curves, so they are maybe using more sophisticated backdoors. And you don't have the resources, you don't have many supercomputers, to analyze whether this is really a prime, whether this is really on the curve, or something like that; of course they have those resources, and they can make such sophisticated backdoors. As I mentioned to you, my computer took eight minutes to reconstruct the private key from the public key; it means I could choose a larger bound and it would take eight days, so when you are trying to factorize, after one day, after two days, you just stop: okay, it is a very secure number. However, if you had run for more than eight days, you would have found the factors. It is the same thing here: if you don't have enough resources to examine the elliptic curves and find the secret parameters, you think that everything is secure with this random number generation; however, once everybody tried to find these secret parameters and to recalculate everything, they found and realized that there is some problem with the number generator. Okay, thanks very much. Thank you very much.
Random numbers are very important in many fields of computer science, especially in cryptography. One of the most important usages of pseudorandom number generators (PRNG) is key generation for cryptographic purposes. In this presentation a modification of the prime generation method of the OpenSSL library will be presented. The modified version of the library passes every well-known statistical test (e.g. NIST test, DIEHARD test); however, an adversary is still able to reconstruct the prime numbers (P, Q) from the public key. The method can be used for malicious purposes as a sophisticated backdoor. The presented research is based on the theory of kleptography and a recently published research paper.
10.5446/18846 (DOI)
It's not actually that; I did attend a lot of conferences, but I'm not bashing developers. I have been a developer myself, and actually this talk came up because I got very, very grumpy every time everybody was bashing the developers: they do it all wrong, they can't see it. And I got fed up with it, because I was a developer. My heart is still a developer's heart; I had to split between developing and security, and I ended up in security, but my heart is still with the developers. And when you see them bashed, it hurts. When I'm hurt, I'm grumpy. So I was at these conferences with all the people, risk officers in nice suits, and I'm the long-haired guy with a beard and a hoodie. Decent people sit next to me, and then the developer bashing starts and I start getting angry. I go: oh, okay, here we go again. But that's what I talk about, because if there were no developers, a lot of the stuff we have, we would not have. So actually my ancestry, where I come from, is completely different. I don't have an IT history; when I went into IT, I hadn't studied it. I quit school when I was 17, had other things to do, and I was a trained mechanic for injection moulding. In injection moulding there is a very heavy iron mould which is closed by the machine; liquid plastic is injected and cools; the mould opens; the parts are retrieved by a PLC-programmed robot; sometimes the robot even places inlays. The robot arm is removed from the mould, the mould closes, and again plastic is injected. Because it is mass production, time is very expensive: the faster I can produce, the cheaper the products are. So I was a mechanic, like a grease monkey, oily hands. And it was really great, because we did soldering, we did electronics, we did mathematics, hydraulics, and there were those robots. And I like it when things apparently move by themselves, so I liked the robots, and we quickly became responsible for keeping the robots running. But when you have timing like that, there is a problem, because you want to save time: you send the robot arm towards the mould before the mould has opened, and you start closing the mould before the robot arm has left the mould. So there is a slight problem: timing. What if? What if the mould is not opening? The robot arm crashes into the mould. Robot arm broken. Not too bad; expensive, but okay. It's really funny, and impressive, if the mould is closing while the robot arm is still in there. That has a different cost. But then we had a developer. He was the developer, so I was the grease monkey and he was the developer. So every time the robot arm was crushed by the mould, he came down, blaming the user: what have you done this time? I couldn't stand that. It should not be possible. I like logic, and it should not be possible that this happens. So what I did: I taught myself PLC programming. I downloaded the ladder diagrams, decompiled and read them, and I found a problem. A very stupid problem. And I felt great, because hey, I found the weak spot: the logic simply did not work as it should. Okay, I found the weak spot, I was the hero, I was cool. So I went up to the office, because we were down in the production area, and I came there with this printed-out ladder diagram and said: I found your fault. You did it wrong. Look at me. Was he happy? We never became really good friends. And that's somehow how security people think: I found this weakness, I broke your stuff, I am so smart, I am cooler than you.
I am smarter than you, because I can break your stuff. But we forget about the complexity of systems. When you write complex systems, sometimes you forget about the obvious, and people think: how can they not see this? How can this happen? It is the complexity; it is very hard to write good code at that complexity. If it's robotics, many parts are moving at the same time; the same if it's an application. So with that much complexity, sometimes you forget to look at the detail, or you just don't find it until it appears. You're not trained for it. So bashing developers with: I'm smarter because I break your stuff. Can you build it yourself? Many times I've been at security conferences, and there was another cool hack: oh, I'm so cool, I break stuff, and here, I wrote a nice Python script, you can download it; don't look at the code, because I can't code. I was like: oh no. So you can't do what developers do, but you're smarter because you can break it? That was the same thing I did in my first code review with the PLC programming: I cannot program PLCs, but I could spot the error. So sometimes I see people like: how can they miss this, how can they not see it? The problem actually is education. That's why I love the OWASP SKF project. Developers are not trained well. Nobody tells them, until: hey, I broke it, I own you, you're pwned. But think about the mindset of people, think how people's minds work. This photograph I took in Iceland, where there are far fewer bicycles than in the Netherlands, and the only bicycle I spotted was at a place where it says: no bicycles here. It's not always obvious. Your mind works differently. It's like: don't think about a red elephant. Sometimes by not saying something, you just assume it. And training the developers, that's the thing. Just coming there saying: I broke it, here, it's broken, without being able to say how to fix it, will not help them. So here they are, the really, really cool hackers. I really got disappointed by the cool hackers, because they're like: oh, I am so cool. (This slide should be animated, but it doesn't work here.) I am so cool, look at all the weapons I have at my disposal. I am really cool. I will get big guns and shoot at your application. So hey, I'm a hacker: I have Kali installed, I use Metasploit. Hey, Armitage, everybody can hack now. Actually, I had one customer where a CISO of another company ran Armitage against their server, on Hail Mary, without an assignment. How can you do that? It's really, really bad. So just having the tools does not make you a real hacker. And, having been a project lead of the OWASP CTF project, I see a lot of people like: oh, I'm a cool hacker, I do CTFs and I am a three-time prize winner. Still, that does not by default make them good security professionals. Because in a CTF challenge, hey, every challenge has a vulnerability, otherwise it would not be a challenge. All you need to do is find the spot, get your token, upload it. You don't have to understand the vulnerability; you just have to exploit it. Mitigation? No, I don't think about it. And most importantly, there's no write-up. I need the token, I got my points. So that's a lot of the hackers I see. So you are the cool hacker: oh, I'm cool. But is it like that? The developers don't need the cool story; they wait for you to help them. Breaking stuff only makes you the one with the right tools; maybe it does nothing for them. And maybe I'm grumpy, or I'm getting old, I don't know what it is.
So I teach at the universities, and the generation gap gets bigger and bigger, and it's very hard to find developers doing security. People in the Netherlands actually say the pond is empty: there are no developers doing security stuff. I said: of course not. What do developers do? Developing. And developers in the Netherlands have a big problem: you are a junior developer for half a year, then a medior developer for two years, and then you have to be a senior developer, otherwise you are slow. And once you're a senior developer, what is your career path? Being an even more senior developer? No, then you have to become an architect or a manager, because then you get more money. So we are killing so much knowledge, because there is no career path for developers: you're a senior developer, and then you move on, otherwise that's your life. So appreciating developers for being good and doing that for years is something that is missing. And for security testers it's even worse, because, hey, I'm the cool guy with all the nice tools and I hack stuff; but experience they lack. A lot of young people lack it, and that's normal, and it should be like that. What's missing is an understanding of communication. We in the IT world are not the best people at talking, be honest. We are very confident in front of our computer, somewhere in the basement, no people around us, just me and the machine. But we have to learn to talk. That's one thing. And we have to learn to talk to other people, with other mindsets, of which we have no understanding. So they come with a checklist: this is all broken. Ha! So, I'm a developer, what do I do with this? Yeah, good, it's broken. I get a report saying: oh, this is all bad. When I'm reading the report: what is good? What have you done? What didn't you do? I see the findings in the report and, something worth saying about the Netherlands, functional testing there is at a really high standard; we think functional testers are not technical, and they definitely don't do security, but they are really good at estimating how much of an application they have tested. With a security test, it is always time-boxed, because security is expensive. So we do a black-box test, one week. For a one-week black-box test, how much testing do you get? There's the budget intake: half a day gone. Reporting: one and a half days gone. So we have three days left. Ooh, what do we do in three days? Running tools, isn't it? And then what do you do if you have time left? Ah, let's find the false positives and skip them, and then you get a report. So when I receive a report, I almost never get the intermediate files, and I want to see what you did and what you haven't done. So if a developer then thinks: hey, let me try it myself, takes the same tool, the same version, runs the same test, and the hacker only reported: those are the things you have to fix, and the rest are false positives or irrelevant findings, then I get a difference in findings. So what happened to those? Have you seen them? Have you even read them? The report should also say: these findings were reported by that tool, but they were not considered real issues. I miss that. And we get a top-secret PDF report. Great, what can I do with a report? I can file it in the bin. Because, yes, I have a link with the payload, but I have to be logged on to my application, and then the link does not work; so I have to click the link and hope the application is in the same state, so that the payload works, and only then can I see what is broken.
And I get a general description of what should be fixed. Does it help me as a developer? Not really. I need a description, I need to know what has been done, I need to understand. And what if the payloads don't work? Why? It's a dynamic application: whether the payload works often depends on the state of the dynamic application. So, the PDF? Yeah, thanks. It's for management: a nice dashboard. So, help me. Enlighten me. Talk to me. Because when you get a report like that, you get in line. What are we paying our developers for? To develop. But they are bashed, because developers don't have many friends. Who here is a developer? Pretty proud of that? I see somebody cringing at this. Yes, I am a developer. Be proud to be a developer, because when you make people proud of what they do, they do a better job. Whether it's a developer, a project manager, or the toilet lady: the moment I appreciate them, they will care about their work. That's what you need: people who care about what they do. And good developers care about their work. But they are bashed so many times. Why? Because of all the different priorities. The users don't like developers: oh, look at this, how could they have done this? It has to be more intuitive, it has to be a more shiny, flashy application, and it had to be delivered yesterday. So they're never good enough. And they are too expensive: you will be outsourced, offshore or nearshore. Nobody is ever happy with developers, but still, everybody relies on them. So that's the thing: our blood and sweat and tears are in our products, and then you get bashed: you're out of time. I'm out of time? Because we sold the project: you said you need three weeks, we sold it for two weeks, please make it happen. I had a sales guy at my previous company, when I was a software architect. They ask: estimate how many hours you think this application will take to complete. So I gave that to them. He comes back to me: sorry, it has to be 250 hours less. Not a problem. Okay, strike something through. Here you go, you saved 250 hours. So he was happy and walked away. But then he realized he'd better talk to me: what did I just do? You have no web interface. But it's a web application! Oh yes, it's a web application. You cannot do this. Why not? You got my estimate; that's what you asked for. Now you tell me it's not good? And people are like: I will talk to your manager. Okay, have fun, he knows me. Appreciate what developers do and accept their experience. Then there is continuous delivery. They work hard. We do DevOps, and it's really good. First we learned to talk to testers; the developers understood: testers are people, we can talk to them. And now we bring in operations even earlier, and that's great, because we used to develop stuff, have it tested by testers, and then throw it down to the basement, to operations. Now operations is integrated, so we have continuous delivery. I hear of companies doing 70 releases a day. Then comes security: stop. We first do a code review, then we do a pentest, then you get an approval, and then you can release. It does not work. Security needs early integration, because the old way does not work: A, it halts the whole chain, and B, you are a developer, you are in your code, in your head is the code, the function that you are developing now. And then something comes by: you know, three months ago you developed this piece of code, and it's no good. I lose my whole train of thought on the things I'm working on now and have to think three months back. What did I do then?
I have a very weird perception of time: it has been, it is, it will be. I have no idea where I was three weeks ago; I know where I was yesterday, I guess. But when you're developing, there is the complexity of the code from three weeks ago, and when you have to fix something there, it not only takes the time to fix: it also takes you out of the current process, and you have to get all the way back into the old process and think about how to fix it. So security should not be like the Ministry of No, which it still is in many places. I come to companies and help them with implementing secure development lifecycles, and the early response is often: oh, you do security. Why? Because you will still say no? No, I'm here for you. When I do code reviews, I always insist on talking to the team. And the managers are like: no, no, we just want a code review, you don't have to talk to the team. I say: then I don't do it. Why not? Because I don't do a code review against the team; I do it together with the team. Who knows the code best? Who knows the code better than the developers? Come on. It's their code. So we fail: the way we security people approach them, the way we deliver the reports to them. So let's think about how we can improve it. Talk to them. I know it's very scary to talk to other people. I was not a good talker before, but I got passionate about security, I got passionate about development. What the heart is full of, the mouth overflows with, as they say in the Netherlands. And I love developing, I love security, I love both, and I love to talk to people. Understand each other. And don't always talk about security; talk about the things they are concerned with. I did a lot of reviews, intakes, stuff like that, and when you're there, the first meeting is always with the manager. What do you hear when you talk to the professionals with the manager in the room? Everything the manager wants them to say. That's not what I want to hear. Actually, I think my previous company should have paid me extra for being a smoker, because the moment you go outside and light a cigarette, you hear all the frustrations, because they know you understand them, you have been in the same place. You have been a developer. I'm not a security guy who shows up, has tools, and hacks them. No, I have been a developer, I have felt the pain. So you're equals. You're out there, no manager around, lighting a cigarette, and you hear all the frustrations. When they understand that you feel the pain, you can actually be their crowbar to get things changed, to get the code improved where they have no time internally. Because you're the external expert telling the management: you have to fix this. So you're actually helping the team to improve their code. They have brains; it's just about triggering them, triggering them in a way they understand. They have to be triggered. And that's something I really valued when I got involved in OWASP in 2006; it's that long ago, I'm getting old. I really liked OWASP because OWASP was there for developers. It was not a security conference or a hacking tool; no, it was a community of developers helping each other to write secure code. And our first mission statement was really black and white: it was the finding, fighting, and preventing of insecure code. We learned, we understood that it's not possible to make 100% secure code, so we changed the mission statement, I think back in 2008, to: make the risks visible to the business, so they can make the right decisions.
But what I found in OWASP, I think a few years later: we had a categorization of our projects and guidelines, and it was build, break, and defend. I was a developer; I was not Bob the Builder. I'm a developer. So even OWASP, coming from the development environment, I think did not really understand what developers are. You, developers: are you builders? I didn't feel like a builder. I'm a developer; it's something different. A builder I see as somebody who puts bricks on top of each other, not thinking. Actually, they do have to think. I'm creative; I create. So make the developers the heroes. And you have to help them become heroes. You always think heroes have all the capabilities, all the knowledge, but every hero has been born blank and needs to gain knowledge. Make the developers your security heroes. It's all about how you present it. Love them, care for them. Not with me, though, I don't like hugging. Don't be the guy who kicks the child. Don't be the bully. As I said, developers' blood, sweat, and tears, their passion, is in the code: if you appreciate them, it's in the code. It's their work. I was this idiot who spent eight extra hours, because I was allowed to stay longer at the office coding, on stuff I didn't like and had to fix. I spent whole nights improving my own code, because I want to be proud of my code. And I know: a year later I look back and my code is like, oh no, what did I do a year ago? That's good. When you look back at your code of a year or half a year ago and think, oh my God, what did I do, it's good: it means you're improving. So be the child doctor. Look at the child and see what's good and where it needs help. Who has children? Who has the nicest children? Who has beautiful children? I win. For coders, the application is their child. When you come there and kick their child, you don't have to wonder why they're upset. So instead, sit with them, examine their child together, and tell them where their child needs more care and more attention to become the best child it can be. So understand them, talk to them, understand what their business is. What is their tool set? What do they use? I come to companies and ask them: what do you do about security? And there still exist companies saying: nothing. And I say: I don't believe you. What? No, we really do nothing with security. Yes, you do, because it does not always have a security label on it. Even functional testers do a lot of security work while saying: no, we don't do security. Because we created this world of mystique: security is cryptography, it's complex, it's black magic. When you get a security test, there's the development team, and then, external or in a different department, there's the security team, all dressed in black; they do their black magic, and out comes this report. So what is the developers' tool set? What do they use? They care about quality. And actually, quality and security are not that different, because security is about what the application should not be able to do. They have their own dashboards, their metrics. Like Glenn always says, they have the code quality tools, they have the performance check tools, and those already cover all kinds of security things; if you tweak them, they do even better. When you do Java, you have PMD, Checkstyle, FindBugs. Oh yeah, but those are not really security tools. Yes, but they improve the code; they take away the low-hanging fruit. And you integrate them into the development street, the pipeline.
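As an illustration of that kind of pipeline integration, here is a minimal sketch of a pre-commit hook in Python; flake8 stands in for PMD, Checkstyle, or FindBugs in a Java street, and the details are mine, not a specific product's.

```python
# Minimal pre-commit hook: run a linter on check-in and give the
# developer immediate feedback, positive as well as negative.
# Assumes flake8 is installed (pip install flake8).
import subprocess
import sys

result = subprocess.run(["flake8", "."], capture_output=True, text=True)
if result.returncode == 0:
    print("Well done, no findings. Thank you!")   # the thumbs-up matters
    sys.exit(0)

print("Some low-hanging fruit to pick before you commit:")
print(result.stdout)
sys.exit(1)                                       # block the commit
```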
So the moment I check code in, like Glenn showed, five minutes later I get feedback: hey, well done. Do you ever get a "well done" as feedback? Because we forget that. You always get the feedback: oh, it's bad. Bam, smashed. Also give a thumbs up: hey, you checked in code, we checked it, it's good, thank you. Appreciate them, make them feel valued, make it constructive, improving bit by bit, rather than bashing them and saying: this is not good. Reports. I said something about PDF reports already; that's a pet peeve of mine. At many companies where I teach developers, I ask: how do they deliver the report? Oh, by email. Don't they come and walk you through the report? Sit down with you, as we say in the Netherlands, pens down, and tell you the truth? Oh no, that would cost extra. If there's a security guy who does a security review and is not willing to explain what he did, what's going on? What is that worth? Is he afraid to share knowledge? Security knowledge is not that special anymore. And people said: wait a minute, you go to customers and you explain to them the tools, the methodologies, everything? I said: yes, I do. But then you will be obsolete one day. Oh, I'm really looking forward to that: lean back in my seat, put my feet on the table, pour myself a nice whiskey, and all is solved. But I think we will have that when we have unicorns, rainbows, and fairies. Because we get so many new developers every year. And how are they trained? Look at the most simple code example, hello world: it's flawed, and that's what they are brought up with. Brian Chess, when he worked for Fortify, went to the authors of the "Java in 24 hours" books, whatever language in 24 hours, pointed at the code examples and said: look at this, this is wrong, it's not secure. You know what they said? Yes, but we have a little remark saying this code is not meant to be in production. That does not work. I teach at universities and tell them about prepared statements, parameterized queries. And the DBAs say: yes, that's how we as DBAs want you to talk to our database. But it's not security. Yes, it is. And then I see their professor's head go down, because this is how he taught them to develop. This happens because the teachers at universities have to keep up with the new technologies: they get "whatever language in 24 hours", they have to learn it, build their classes from it, and then teach people. They're not experienced developers. Developers should be trained by developers. New technologies, like app development: who is building apps here? Anybody? The younger people. I know teams where everybody is like 25, 26 years old. I have a son at that age; it's funny, I could be your father. A German accent always helps to get attention. But then one team is like: ah, we have a guy who is 52, and they're like: oh, that's cool, we have a 52-year-old guy in our team doing cool stuff, app development. He has been doing Java for 15 years. Yeah, but now he's doing mobile apps. Hey, come on, it's not all that new. It's flashy, and you need the new guys and girls pushing the boundaries, really going for it. But when all the young ones run toward the cliff, you need the older ones; maybe they sometimes have the smart idea. So variation in age and gender is very, very important for a good team, for the security team and the development team. So, to finish the story: when I deliver a security report, there's a write-up in it. And I'm not a good book writer; I cannot write novels.
Otherwise I would have a different job, I guess. But tell them how you experienced the application. In my write-up I describe the application: in short sentences I write down what I thought and what I experienced. When I click this, hmm, there's an app starting up; let's look at the app code. Hey, there is an upload functionality. Oh, I cannot upload a file bigger than 10 MB, and only three lines will be read. What will happen if I upload a text file of five MB with one single line in it? Make them understand that you think in a process. If they can replay it, they will understand much more than from "it's wrong". Tell the story, tell it nicely, tell it personally. I really have a hard time understanding why security people charge a lot of money for a security test and then charge additionally for explaining the report one to one. It's a team effort; security cannot be done by one person. When I founded the security task force at my previous company, one internal guy said: we have a project going on, security might be interesting; it's not really in the requirements. I always love that: there is no security requirement. So, can we have one of your team? I said okay. The guy from the team called me: Martin, you have to come and explain it again. So I went there and said: what's the problem? They are developing, and they expect me to do the security. Not working. It's a team effort. Even my own son: he's now a Java developer. I remember when he started coding, and I was telling him about code quality and so on, and he was like: oh yeah, dad. Now that he has started Java development, he's like: ah, you talked about Jenkins, you talked about SonarQube. Now he's interested. So for me the next stage: here is a security book. He's like: thank you. And every time I see him, it's: how far are you with the book? And he's really excited. He used to tell me that security is what the other guys do. But security cannot be given away to other people; security is everybody's responsibility. It's holistic. Make the team go: wow. If you are proud of the functionality, make sure it can only do what it should do. There has been a law case in America where an insurance company was hacked. Of course there was no internal development, so they went to the software company: you delivered us this software, and it was hacked. And the software company said: ah, but there was no security requirement. It does not work like that anymore, because they said: hey, wait a minute, we are an insurance company, we have a lot of money. We sue you, because these are all known vulnerabilities, you have them in your code, so you should have known. So responsibility will change; liability will change. Understand your code: when you have the functionality described, by default the application should not do anything else. We are also living in a technology-believing time. I come to companies to talk about integrating security in the development lifecycle, and what does everybody think about? Tools and more tools, technology to solve our problems. I had a guy from a big banking organization in the Netherlands, and I explained about code review, static code analysis. He goes: oh, that's cool, so if you put bad code in, good code comes out? Nope. Okay, let's start again. Make them understand: a tool helps you, but a tool doesn't do anything by itself. The tool will help you, and that's what you need: tools give easy access, easy reporting, understandable reports, and early feedback. But all the tools out there can only find the technical problems: bugs.
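Since parameterized queries came up a moment ago, here is a minimal sketch of the difference, using Python's built-in sqlite3; the table and the hostile input are invented for the example.

```python
# SQL injection vs. a parameterized query, side by side.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")
conn.execute("INSERT INTO users VALUES ('bob', 'hunter2')")

name = "alice' OR '1'='1"   # hostile user input

# Broken: input glued into the SQL string, so it becomes SQL itself
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
print("concatenated:", rows)       # returns every row in the table

# Right: the placeholder treats the input as data, never as SQL
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print("parameterized:", rows)      # no rows: the injection is inert
```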
What those tools can't find is flaws, the functional problems. A question Gary McGraw normally asks the audience, a bug being a technical problem and a flaw being a functional problem: what do you think is more common? Who thinks there are more bugs than flaws? Nobody? Who thinks there are more flaws than bugs? Typical for tech people. Actually, it's 50-50. Half the vulnerabilities we could eliminate by looking differently at the functionality. Understanding that is like common sense, but we are all focusing on the bugs: all the technology focuses on the bugs, the KPIs are on the bugs, but the flaws in the functionality you can only find by using your brain. And you can prevent them before writing a single line of code, just by looking at the functionality and thinking: would that be smart? If you cannot explain understandably what you found, what the weakness is, if you cannot explain the problem easily and simply, maybe you haven't understood it. Just having a tool set is not enough; you need to understand it. That's something where OWASP, and I was involved in that, has been doing a different kind of event: the free OWASP challenges. It's like a CTF, but different. Why? It's not just about the token you upload. You have three questions: if a challenge is worth 10 points, the first three points you get for explaining the vulnerability, another three points for exploiting the vulnerability, but you get four points for telling us the mitigation. And it's a text answer, so there are real people behind it. It's more effort, but it's a community. So there are 100 people doing the free OWASP challenges, but there are also 100 people being teachers and telling people about security. That's all we need. Thank you.
Over the years, I have attended quite a number of security conferences and got more and more frustrated. Bashing developers, blaming them for writing insecure software, for not going to security conferences. It is easy to blame, but what's the point? During this talk I will show why the security community has failed to connect to the developers and, more importantly, how to do it right!
10.5446/18844 (DOI)
Safety is barely more than a fantasy. And even if systems are disconnected and claim to be highly secure, such as nuclear enrichment facilities or military sectors, there is always a way in, especially for institutions or groups with the right amount of money wanting to make a major impact. In June 2010, a small Belarusian security company discovered an unknown computer virus they called Stuxnet. The virus used USB flash drives and LAN networks to spread globally. By monitoring the activity of Stuxnet, the experts found out that 70% of the infections occurred in Iran. Was Stuxnet a sophisticated cyber weapon? Who or what was the intended target? Very deep in the 10,000-plus lines of code, experts found the answer. Working like a fingerprint recognition process, the virus was looking for specifically configured Siemens modules: exactly the module scheme which is used to control uranium enrichment centrifuges. And the target? The secret Iranian uranium enrichment facilities in Natanz. The virus manipulated the centrifuges and was able to destroy 2,000 of them incognito. Today it is known that Stuxnet was a cyber weapon initiated by the USA and Israel under an operation called Olympic Games. This sophisticated attack succeeded in slowing down the Iranian nuclear program for years. Now out in the open, the code can be used as a blueprint for future attacks. These attacks could happen to almost any power plant, any factory, any ICS that can be found close to your own home. Target-rich environments are not just in the Middle East: with massive infrastructure systems, the US, Europe, Japan, Australia, and South Asia are also prime targets. The question for us now is not if there is a new attack looming, but when and where. So you probably all know about Stuxnet, and it's a really awesome video. The thing with Stuxnet is: we know how the attackers got in, how they broke into the Windows systems; they used zero-days and so on, it's all cool. We also know what Stuxnet did: it overspun the centrifuges so that they just broke. What we don't know yet is how exactly the attackers designed their payload. How did they know what the important parts of the process are, what to break? Prior to this work, I wrote a lot of exploits on SCADA, but those were single instances of attacks. I was still thinking: what do I need to do if I want to perform an attack from beginning to end, from entering the plant to finalizing the payload? What should be done, what should go into the payload? I used to work with several chemical plants, models of the plants, and I have an extremely complex model of a vinyl acetate plant, and I never knew where to start. Okay, I have a plant here; how do I hack it? Then I put this challenge on myself, and this is what this presentation is about. To start with, it's important that we align the vocabulary. What is an industrial control system? It's a bunch of computers which are hooked up together to control a physical process. The physical process could be water treatment, power generation, an assembly line, anything. So it is something like this. And the typical architecture looks like this: the physical process at the bottom and a lot of layers of IT systems on top, and the data flow goes from bottom to top.
And the entire purpose of having this IT infrastructure is to get data about the state of the process, process it somewhere in the upper layers of the IT infrastructure, and then decide how to control the physical process. That's why these systems are also called cyber-physical systems: IT systems deeply embedded into an application in the physical world. And in contrast to the traditional IT domain, the interest of the attacker is not the data as such; the interest of the attacker is in the physical world. The attacker wants to do something to our physical application, to bring it into a specific state or to make it perform specific actions. And continuing on the topic of movies: you know, James Bond is probably one of the reflections of all the cool trends happening in the modern world. And there was a clear transition: we used to have all the classic chase and gun scenes, and in Skyfall, the last James Bond movie, we already have hackers, which for me is of course a little bit of a pity. And in the upcoming movie we have a hacking story again. If you remember, in Skyfall there is this scene where the villain, Silva, is sitting in his prison cell, and they hook his computer up to the MI6 network; the virus from his computer enters the network, and Silva had already programmed that malware to open his prison cell so he could escape. So this is an example of a cyber-physical hack: you launch an attack in the cyber domain to achieve certain specific desired effects in the physical world. So you see, cyber-physical security and cyber-physical hacking are becoming popular. And as I already told you, the goal is to understand this: yes, we need to penetrate the IT systems, but then we need to program our cyber-physical payload, to put in the specific instructions which will bring the physical system into the desired state. The talk which I'm giving today I also gave at Black Hat. I picked a chemical plant, a specific chemical process, and it turned out the attacker actually has to go through specific stages; think of it as a kill chain. At each stage the attacker needs to complete specific actions, and this is what I will present in this work. To start with, a little bit about what the issue is with control system security. What we know by now is that these systems are terribly insecure; vulnerabilities are discovered and disclosed every day. It's kind of boring already, it's not interesting anymore. So what is the problem, truly the problem? The responsible parties issue advisories: okay guys, there is a vulnerability, pay attention, ask for a patch. But what the advisories tell you is: the impact of that vulnerability is specific to your organization, and it's your role and your job to go and understand what exactly it means to you. And unfortunately the operators don't know how to evaluate it. So, for example, one of the recent, cool, and highly publicized vulnerabilities was in industrial switches. And in the mass media it was publicized like: oh my god, the attackers can now do whatever they want, because they now have access to nuclear facilities around the globe. So, like, well, okay, here's your plant.
So now assume that you have access to some switch inside of it. How exactly are you going to set this place on fire? What will you even start with? The typical understanding of post-exploitation in SCADA is movie style: once you've hacked in, there will be some red button which you press, and the system will fail in exactly the way you need it to fail. The truth is, this button doesn't exist, and the attackers actually have to build this button, to build this payload. Just to give an example: after my presentations I typically get the question: well, but it must not be that complicated, you can always do something to the system. Well, first of all, the attacker is rational. The attacker has a specific goal in mind; the attacker will not hack into complex facilities just to do something random. And second, even if he just wants to make the plant fail, he still needs to understand where the vulnerability in the process, in the equipment, is which he can exploit so that the process fails. So here you see sensor signals of the reactor pressure; the reactor is typically the most sensitive component in each plant. This is an example of four attacks at random times, random meaning you know nothing about the system and just try to attack. And you can see the effect can range from "well, I don't care, it's just a glitch", to economic inefficiency, to a near miss, where you almost reach a safety accident but do not, or you can actually, by chance, cause a safety shutdown. Although a safety shutdown means that the plant goes into the safe state and does not explode. So you can see: if you really want to do something specific to a plant, you need to understand your plant. And therefore, when you try to evaluate the impact of these vulnerabilities, and I have already done several works explaining how to evaluate this impact, the operators of the facilities need to know exactly what the attacker can do with that specific vulnerability, what kind of attack they can launch, and whether any necessary preconditions are required. The attacker may have a perfect plan, but the control system may block his commands: okay, this is a stupid command, I'm not going to follow it, because there are a lot of safety precautions in control systems. And you also need to understand how severe the potential impact is. Answering all of these questions requires understanding how the attacker interacts with the control system and with the process, and this is, by now, the largest mystery of the 21st century: nobody knows. So this is exactly what my work is about. And in order to understand all the necessary next steps, how the attacker performs such attacks, I'll give you the basics of process control, because without them it's important context that's missing; those who know this already can tweet for now, but I'll give you all the necessary basics so we will understand each other. So what is process control? The way to understand process control is on the example of a heating system. In the early ages we used to have this manual valve to control how much fuel goes into your furnace: you physically feel whether it is cold or warm in your house, and then you manually adjust the inflow of fuel. So in the 20th century, we automated everything.
So now you have a thermostat, and you start with a set point: you set which temperature you want to have, and then the thermostat measures the temperature and controls the inflow of fuel into the furnace. It all happens in a control loop: the sensors measure the temperature in the room and send this data to the control system; the control system computes the difference between the desired temperature and the current temperature, and based on this difference it computes a control command for the fuel valve. All process control, in any cyber-physical system, cars, aircraft, robots, happens in control loops. There are many control loops in a chemical plant, and the ugly thing about them is that they are all interrelated: if you adjust something in one control loop, for example here, this one in red, everything else changes. And the attacker actually has to take all of those effects into account. In real life, in large production, the operations are much more complex than just controlling the heating. Typically you need specific control equipment, which is called programmable logic controllers, and they typically look like this; this is an Allen-Bradley. And this is a water treatment facility, a real photo taken by me: you have sensors, you have actuators, in this case a pump, and the wires from the sensors and actuators go into the wiring cabinet where you have your PLC. The entire control still happens in this control loop: the process measurements taken by the sensors go into the PLC and are copied into the input buffers; the PLC executes the control logic, computes the control commands, and sends them to the actuators. So what is control logic? Control logic is the program inside the PLC. It defines the logical sequence of events: what should happen to the process, at what time, under which conditions. For example: if the pressure in the reactor controlled by this PLC is larger than 1800 kilopascals, then reduce the flow in PLC 3. It just defines the logical sequence of the events. What it also does is define what should not happen at any cost. Maybe 15 years ago you had a lot of mechanical redundancies in the processes, like catch basins, manual valves, rupture disks; but maintaining these mechanical safety precautions is extremely expensive, and we all now try to optimize, to run things faster and cheaper. So the safety measures have moved into the software: in the control logic you will have the so-called interlocks and also the master stop. So, for example, you can break a motor if it runs without oil; if there is a condition like "the motor is running and there is no oil", the system will stop, because it is an unsafe operation. So even if the attacker wants to do something to the system, the system will not allow the attacker to do that, unless the attacker does the next step: reverse-engineers the logic and rewrites the logic. The control logic does not compute the control commands, though; the control algorithm is what really computes the commands to the actuators. And to IT people it looks unattractive; this is typically where all the IT people say: well, it looks ugly, I don't want to hack physical processes. You still have the set point, and you have the measured value, the state of the process; the control algorithm computes the error and then computes the command to the actuator based on three components of a differential equation, which are called proportional, integral, and derivative. The response will depend on these coefficients.
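To pull the last two ideas together, here is a toy scan cycle in Python: the interlock check first, then the PID computation. The tag names, limits, and coefficients are invented for illustration; real logic runs on the PLC in ladder logic or structured text, not Python.

```python
# Toy PLC scan cycle: interlocks, control logic, and a PID controller.
SET_POINT = 1700.0            # desired reactor pressure, kPa (made up)

kp, ki, kd = 0.8, 0.1, 0.05   # PID coefficients (the hard part to tune)
integral, prev_error = 0.0, 0.0

def scan_cycle(pressure: float, motor_on: bool, oil_ok: bool) -> float:
    """One PLC scan: safety checks first, then the control command."""
    global integral, prev_error

    # Interlock: never run the motor without oil; trip to the safe state
    if motor_on and not oil_ok:
        raise SystemExit("interlock trip: safety shutdown")

    # Control logic: the hard limit from the example above
    if pressure > 1800.0:
        return 0.0            # close the inlet valve, reduce the flow

    # Control algorithm: proportional + integral + derivative terms
    error = SET_POINT - pressure
    integral += error
    derivative = error - prev_error
    prev_error = error
    valve = kp * error + ki * integral + kd * derivative
    return max(0.0, min(100.0, valve))   # valve opening, percent

print(scan_cycle(pressure=1650.0, motor_on=True, oil_ok=True))
```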
The control logic itself does not compute the control commands. The control algorithm is what really computes the commands to the actuators, and to IT people it looks unattractive — this is typically where all the IT people say, "Well, it looks ugly, I don't want to hack physical processes." If you look here, we still have the set point, and we have the measured value — the state of the process. The control algorithm computes the error and then computes the command to the actuator based on three components — proportional, integral, and derivative — and the response will depend on these coefficients. Actually, finding the right coefficients to control the process is one of the hardest tasks possible, and typically it is done by consultants who earn tons of money. For the attacker, all of this is extremely important. All of this data and these coefficients matter, because this is what defines the response of the process. The PLC cannot do the whole job on its own; you still need the human in the control loop, because the PLCs don't have the entire picture of the process state, and they also don't have time trends. That is what is observed by the human operators in the control room. However, there are more than 10,000 measurements in each plant, and obviously the operator cannot monitor all of them every minute. Things go wrong in plants all the time, so the operator has a lot of alarms flashing on his screen, and his main job is to respond to alarms. If an alarm is not popping up, he is not watching those parts of the process. With that, you now know everything you need to know for cyber-physical hacking. The next question that would probably interest you: why would I want to hack SCADA systems? What is in it for me? First and foremost we think of the cyber criminals, because most hackers out there are cyber criminals: they hack in order to monetize their attacks. And in industry there is a lot, a lot, a lot of money. If you know how to monetize your attack, you can become a millionaire quite quickly. One scenario which happens quite often is extortion: you demonstrate to the facility owner that you can actually do something to his processes, and then you extort him. The link you see is from 2008. Since then, regulations on how accident data is handled have appeared, and most accident data is now classified knowledge, so we see almost nothing in the mass media, and it is very difficult for us to know exactly what is happening out there. Before those regulations took hold we could really see it — which is why most of the known extortion cases are from earlier years. So let's say we are cyber criminals and we want to do something to the process. I also just gave a talk in Prague where I explained at length how you can actually monetize SCADA attacks, because there are a lot of scenarios. But to start with the attack, you need to understand what you can actually do to the process. All attacks on physical industries can be divided roughly into three groups. The first: you can damage the equipment. These are breakage attacks. You can either overstress the equipment — this is what happened in Stuxnet — or you can really break it by violating the safety limits of how you should operate it. The second group of attacks is production damage: you basically make the plant less profitable. You can, for example, mess up the product quality or make the plant produce less.
You can launch an attack so that the cost of production increases — for example the usage of energy, or the loss of raw materials in the purge. And the third class of attack in this group is maintenance effort: make the process misbehave so that the guys have to run around and troubleshoot it all the time, maybe invite external consultants and so on. Make them worry. These two groups of attacks will never make it into the newspapers, because the companies do not have to report them, and they will not report them: it is their reputation at stake, and if they report, they may even pay penalties for not maintaining the plant in the right way. In most cases the companies will not even report it to the authorities. If you really want to get the plant into the mass media and damage its reputation, you want to make the plant non-compliant. Almost all industries are heavily regulated, and the regulations are publicly known, so you can make a plant non-compliant. The most damaging class is of course safety — occupational safety, meaning humans, and environmental safety, for example large spills of oil. A less damaging class would be pollution, meaning environmental pollution — for example contamination of soil or water, or exceeding the permitted concentration of heavy metals in emissions. And the third is contractual agreements: most industries are obliged to deliver the product at a specific time, and every day of non-delivery costs a lot of money. So let's assume the attacker knows and understands all of this. How does he choose which attack to launch? This would be the thinking process. Equipment damage is what comes to mind first: let's break something, or blow it up. The downside of breakage is that it is irreversible — if you want to use it in an extortion attack, you cannot undo it, so nobody will believe you next time. Another negative side: the collateral damage isn't clear. If something bursts and there is a human in the vicinity, you can hurt the human, and with that the attack becomes a compliance attack. So is it good or bad if it becomes compliance? The negative part — and this is the argument — is that compliance attacks must be reported to the authorities, and if it is reported to the authorities, very serious guys will run after you. Unless you are completely sure that they will not be able to trace you, you do not want to launch a compliance attack. But maybe, if you really want a company to end up in the headlines and you really want to damage its reputation, then you do want a compliance attack — because if there are multiple, repeated violations at the plant, it will actually be shut down. Again, a negative is the unclear collateral damage: if you try to, let's say, make the plant contaminate the local water, and it also kills all the fish, suddenly it becomes a safety issue, and again very serious guys will run after you. So unless you are sure that you can hide well, you don't want that. Among all the cases, production damage is actually the most attractive: it does not have to be reported, and you do not hurt anybody. It's a really safe harbor.
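Just as an illustration of this thinking process — not anything from a real attack toolkit — you could write the decision down as a small table and filter on it. Every attribute here is my own rough judgment of the discussion above.

# Attack classes and the properties discussed above (rough judgments,
# invented for illustration). The attacker filters for options that are
# reversible, do not have to be reported, and carry low collateral risk.
ATTACK_CLASSES = {
  equipment_damage:  { reversible: false, reportable: false, collateral_risk: :high },
  production_damage: { reversible: true,  reportable: false, collateral_risk: :low  },
  compliance:        { reversible: true,  reportable: true,  collateral_risk: :high },
}

safe_harbor = ATTACK_CLASSES.select do |_name, p|
  p[:reversible] && !p[:reportable] && p[:collateral_risk] == :low
end

puts safe_harbor.keys.inspect   # => [:production_damage]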
So that was exactly the case I chose for myself: I wanted to design an attack from beginning to end, and this was my attack scenario — I want to cause persistent economic damage. This attack scenario would be useful as an argument in an extortion attack, or if I want to kick a competitor out of the market. And actually this type of attack happens in the IT business all the time: larger companies hire black hat hackers to hack into smaller and medium competitors to destroy their competitive advantage. The key word here is persistent, because the difference between IT attacks and cyber-physical attacks is that you cause effects in the physical world, and you can't hide them by simply erasing the logs — "no, no, no, nothing happened." The guys will notice. Your task is then, first of all, not to raise alarms, because then the guys will start watching; and secondly, to mess with the attribution — design your attack in such a way that they will not attribute it to a cyber event, but simply to natural misbehavior of the plant. This is what is important to keep in mind. So we are now ready to start hacking. One of the difficulties, and the reason we do not see a lot of research done in this area, is that in order to learn to hack something, you need to have that system at hand. In the computer or software world, you have it. Here, you could, for example, buy a plant and try to test and exercise your skills on it. The problem is that plants are extremely expensive; secondly, you need an army of people running it for you; and certainly, if you break the plant as a result of a successful attack, you will need a lot of new money to repair it. Not a very sustainable approach. Therefore the entire research in process control, chemical engineering, and control engineering happens on models — realistic models of the physical objects. So this is what I've done: a very accurate and realistic model of a vinyl acetate plant. Vinyl acetate is a commodity chemical, used as a building block for paints, adhesives, plastics, resins, and so on. This is the case study for this attack. As I mentioned, the attacker goes through a series of stages before he is ready to finalize the payload. These are the stages, and we will go through all of them. In the first iteration the attacker goes through the stages in order, but then he might go back to previous stages if he forgot something — it's all interconnected. You might jump between stages, or sometimes repeat the exercise on the same stage again and again. So let's go through them. Access. Okay, this doesn't work. I'm sorry if I'm speaking fast — it's a very intense presentation, there is a lot to say, and I cannot cut anything out, so bear with me. The access stage is the most familiar one to all the IT hackers. This is a typical layout of an industrial control system. If you start from the outside, you find, for example, a zero-day to own some computer in the office network, then you use any ingress connection into the control network — for example, you get in with the updates, or with database links, or backup systems, or anything. And once you're in the control network, you can move freely, because there is no security there.
Obviously, you will still need to exploit the industrial devices, and if you don't have experience with that, no problem, because there are already exploit packs which you can buy: every publicly known vulnerability is already compiled for you into an exploit pack. Just buy it — it's not free, and it's still not Metasploit, but you can buy it. By now, most companies are starting to get this part right, so it might be a little more difficult to go through this path. Still no problem: you can go directly into the control system, because nowadays they put all the industrial control devices onto the internet. What you do: you go to the ICS-CERT advisory database, you select the vulnerability of your choice, then you use Shodan or any other search engine to locate vulnerable devices, exploit, you're in. This is the modern way of doing it. The access stage is actually the last stage which has anything to do with IT. From now on, you have to start thinking like a process engineer, a chemical engineer, a control engineer, and so on. We are in a completely new domain now. So: discovery. First and foremost, you need to understand what this plant is doing, how it is doing it, how it is built, what equipment is there, and so on. Do you know what a stripper is? No, it's not Magic Mike — it's a stripping column. And this is what happened to me when I first googled "stripper." You really, really, really need to know the specific equipment, and it is outside the expertise of traditional IT people — and I am coming from the IT domain, from telecommunications. The attacker needs to figure out what the process is doing and how. Even if the attacker knows that he is in a vinyl acetate plant, the exact chemistry and kinetics of the process are unique to each plant, and the attacker needs to figure them out. Typically this information is not even in the possession of the plant owners; it is held by third parties, by subsidiary companies, so the attacker actually has to perform this reconnaissance through the third parties. Then the attacker needs to know how the process is controlled, how it is built and wired, and of course the operating and safety conditions. The necessity of this stage of the attack is well understood by the attackers, and this stage of cyber-physical hacking started long, long ago. We hear about espionage campaigns all the time — every week there is some campaign, and they have been going on for years, with real malware samples to show for it. The attacker is interested in things like chemical formulas, piping and instrumentation diagrams, instrumentation lists, wiring diagrams. All of this is necessary for the attacker to reconstruct the layout of the plant. And when the attacker starts to understand a little how the plant works and is built, he can start making first assumptions about what kind of attack he can launch. We want to cause persistent economic damage. One of the first, easiest ideas: destroy the pipe which carries the final product. This is very effective. The problem is that it will be noticed quickly and repaired quickly — you cannot persist with this type of attack. The rest of the plant can be divided roughly into two parts: reaction and refinement. Refinement is the largest part of the factory — it's a couple of kilometers long, so you need a bicycle to get from one part to another. So the attacker has a lot of opportunities to do something to the process.
But the operator also has a lot of opportunities to notice something and respond. And if, for example, the product is not pure enough, you can actually just refine it again — so attacking here could be tricky. In contrast, if you mess with the reactor itself and make the reactor produce less, then you reliably have less product. This really sounds like a good attack scenario for persistent economic damage: if you produce less product, you simply have less, and there is nothing the operator can do about it. But how do we do that? How will we make the reactor produce less? That comes later. At this stage, the attacker is still not ready to design the damage attack, because he still does not know what his capabilities to control the plant are. So at this stage the attacker keeps discovering the plant, to understand his capabilities. One of the most difficult parts — the real hacking part of this stage — is to reconstruct the mapping: this is a pump on the piping and instrumentation diagram, this is the pump on the plant, and it is allocated somewhere in the PLC, somewhere in the control logic. The attacker needs to reconstruct these links, and this is one of the most time-consuming and difficult parts, because there is no direct mapping between all of these representations. The attacker needs to exfiltrate a lot of documentation and needs a lot of engineering knowledge to perform this mapping. For us it was also extremely difficult. Interestingly enough, we already have malware in the wild which tries to do this for the attacker. One way to perform such a mapping is to hack into the OPC servers. All the equipment in the field, like pumps and sensors, speaks proprietary protocols, while all the IT equipment on the upper layer speaks Ethernet; OPC is the link which allows these two worlds to talk to each other. Last year there was the Havex malware, which was trying to map OPC clients and servers. The description of Havex which you see on the slide is not entirely correct, because the description was given by an IT company which is maybe not so fluent in the terminology: there was no discovery of the equipment in the field yet — that part was extrapolation. Potentially the malware can really start mapping devices in the field — maybe a later version already does, but we did not catch a sample of it. In the version of Havex that was caught, it was just discovering all the OPC servers, clients, versions and so on, preparing for the next stages. So at least we know the attackers are this far along, and they also understand all of these stages. In order to control the reactor, we need to find all the controls — basically the actuators which are around the reactor: all the pumps, all the motors, all the valves. In the vinyl acetate plant, these are the controls which we were able to locate. "XMV" is just the name of the variables. If you think that this is "hooray, we've done it — we now have the controls, let's start controlling something"...
...the problem is that in this engineering field, having the controls does not mean that you can control something. If you try to control the process, it might misbehave, and it will not necessarily comply with all of your commands. For IT people this concept is very difficult to understand, but I will explain it to you. With this we transition into the control stage. The discovery stage was about static discovery of the plant: everything was static, time was zero. In the control stage, we start understanding the dynamic behavior of the plant, because whatever you do to the plant causes effects downstream and upstream. This is an important concept: once you hook equipment together, it becomes linked not only via protocols and electronic links, but also by the physics of the process. For example, you can cause cavitation effects — bubbles in the liquid — in one part of the plant, and they will propagate and prevent a pressure sensor from taking pressure measurements. Even if these two components do not talk to each other electronically, and maybe even belong to different segments of the network, they still talk to each other through the physics of the process. Therefore the security boundaries in cyber-physical systems are not limited to the cyber domain; they propagate into the physical domain. And the attacker has to take all of these effects and interdependencies into account. You've already seen this picture: I manipulated this valve, and interestingly, if you look at these two physical values — this is flow, this is temperature — you see they respond similarly but in opposite directions, and it's really funny why it is this way. The problem is that there are millions of parameters which influence the behavior of the physical process. This is a slightly larger representation of the control loop: there are a lot of components involved which have an impact on how the process will respond to a command. When I was designing my exploits, I had to take all of these into account and actually encode them into my payload. One example of why this matters is controller tuning — I'll show you. This control loop, for example, I was not able to control; this is the example of a control you cannot control. If I operate this valve, it shows a ringing effect, which is caused by a negative real pole of the controller — roughly, the case where no smooth continuous equivalent of the discrete behavior exists. You might say: why should we even bother, this ringing effect is so small, why even look at it? The problem is that this ringing propagates downstream, and in other control loops it already causes an impact which is extremely large. All of these high points were hitting alarm limits and causing alarms. And since I don't want to trigger alarms, there was no way I could operate this valve. Basically, this control was not useful to me; this control I could not control.
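To see what such ringing looks like, here is a tiny sketch — not tied to the real plant model, with made-up numbers — that simulates a first-order discrete system with a negative real pole. The output flips above and below its steady-state value on alternating samples, even for a constant input, which is exactly the kind of ringing that keeps tripping alarm limits.

# Discrete first-order system y[k] = a*y[k-1] + b*u[k] with a negative
# real pole (a < 0). A constant input produces an alternating, "ringing"
# output; with a positive pole of the same magnitude it would settle smoothly.
a, b = -0.8, 1.0   # pole at -0.8: invented numbers for illustration
y = 0.0
u = 1.0            # constant command, e.g. a valve step

12.times do |k|
  y = a * y + b * u
  puts format("k=%2d  y=%+.3f", k, y)
end
# The output oscillates around the steady-state value u*b/(1-a) = 0.556,
# overshooting above and below it on alternating samples.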
Another reason why the process typically behaves in a very strange way is that it is non-linear. All physical processes on planet Earth are non-linear. What does that mean? For example, if you heat water from 70 to 80 degrees, it behaves completely differently than when you heat it further, to, say, 90 degrees. And we only know the behavior of physical processes to the extent of our modeling: we model every physical phenomenon, we load these models into the controllers, and the controller controls the physical process according to the control model it has. If the process was never expected to operate at 90 degrees, the controller does not have control logic for that temperature range. And since the attacker will typically try to move the process into a state outside of, let's say, the optimal operational boundaries, this is exactly where the controller will not be able to control the process — so the attacker also cannot control the process. As you can see here, the process responds to some control command, and you can see it's really non-linear, and all of these overshoots caused alarms. A further challenge for the attacker: when he manipulates the process and observes the response, he does not know whether it is the effect of his attack or a property of the system design. This is a huge challenge. So when he tries to understand the dynamic behavior of the process, he needs to take all of this into account and try to isolate the effect. We studied two types of attacks: the step attack, where you bring the process to some state and leave it there; and an attack where you attack the process, let it recover, launch another attack, let it recover, and so on. And the outcome of the control stage — this was the result — is that we tried to put all of the results together into some mental picture of the dynamic behavior. Right now it's a work in progress: we are trying to find a nice way to map the physical dynamic behavior of the process into some process fingerprint — kind of like creating a sort of hash function, only one that you can also read back. And I guess others are doing this too, and I'm sure such fingerprints will eventually be traded on the black market. So the outcome of the control stage is that you have to categorize the control loops into those which are reliable, which you can control, and those which are unreliable and which you cannot use for your attack design. And you also have to understand which parameters of the process will cause alarms and which will not. You have to finish this stage with a set of reliable controls and an understanding of alarm activation.
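As a toy illustration of that outcome — and nothing more — one could tabulate the manipulations tried during the control stage and sort the controls into usable and unusable ones. The structure below is invented, with made-up variable names and results.

# Results of probing individual controls during the control stage
# (invented data). A control is only useful for the attack design if it
# responds reliably and the probing never tripped an alarm.
trials = [
  { control: "XMV(1)", responded: true,  alarm_tripped: false },
  { control: "XMV(5)", responded: true,  alarm_tripped: true  },  # the ringing loop
  { control: "XMV(7)", responded: false, alarm_tripped: false },
]

reliable, unusable = trials.partition do |t|
  t[:responded] && !t[:alarm_tripped]
end

puts "usable for attack design: #{reliable.map { |t| t[:control] }}"
puts "excluded:                 #{unusable.map { |t| t[:control] }}"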
Once we are done with the control stage, we can really start thinking about the damage — what kind of damage we can cause. We are now in the fourth attack stage: damage. This is one of the most difficult stages for the attacker, because you need expert knowledge: input on how the system fails. The easiest part is to start reading accident reports — if the system has failed in a certain way once, there is a good chance it will fail the same way again — and all of that information is public; you can find a lot on YouTube and so on. I will try to emphasize again why damage comes after control. At CCC, for example, I presented a beautiful damage attack scenario: let's poison the catalyst in the reactor. Poisoning the catalyst is an extremely expensive attack for the facility owners — the most expensive thing you can imagine. To kill the catalyst in the reactor, you need to raise the temperature in the reactor above 200 degrees. The problem: once I had presented this at the conference, I returned and tried to implement the attack, and I was not able to control the necessary control loop. I was not able to keep the temperature at 200 degrees long enough to kill the catalyst. So even if you come up with a beautiful damage scenario, if you cannot implement it because the control system does not allow you to, all of your previous efforts were useless. Therefore we start with control, understand what we can control, and only then, with those controls, start designing damage attack scenarios. And you will probably want to design several scenarios, because you will need to put an if-else into your payload: if one does not work, you use the second one. So, let's start with the damage. One of the challenges for the attacker is that the process is not designed in a hacker-friendly way. There might be no sensors measuring the values which you need for your attack; or the information about the process may be spread over multiple systems, and you have to break into all of them; or the control loops may not control the parameters which you need for your attack. How was it in our case? We want to produce less product — to reduce the effectiveness of the reactor. In order to measure the impact of this attack, we need to be able to measure the concentration of vinyl acetate molecules at the reactor exit. The concentration of chemicals is measured by analyzers. There are four of them in the plant, but none at the reactor exit. Why? Because analyzers are extremely, extremely, extremely expensive, and they are only in those places which are really necessary for plant operations. To compute how much less product is being produced, we need flow and we need concentration. The only place where this combination is available is here, at the end of the plant — but this measurement becomes available to the attacker after eight hours, which is too long. You can't manipulate something, wait eight hours, and see what happens. So we really, really, really need to find a way to measure this effect here — but we can't, because there is no analyzer. The only two measurements available to us are flow and temperature; flow we do need, but we don't have an analyzer. But we are hackers, so we can always find a way. In process engineering there are two types of answers: the engineering answer and the technician answer. The technician answer tells you, "okay, something is decreasing"; the engineering answer tells you how fast, or in how much time. This is a very useful concept for us, because we can use the temperature measurement as a proxy measurement: if there is less reaction happening in the reactor, then the temperature at the exit will be lower.
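A minimal sketch of that proxy idea, with invented numbers: the technician answer is just the sign of the change, while the engineering answer also gives the rate.

# Using the reactor exit temperature as a proxy for reaction rate
# (illustrative numbers only; the real plant model is far richer).
baseline = [159.2, 159.1, 159.2, 159.3]   # degrees C before the attack
attacked = [158.7, 158.1, 157.6, 157.2]   # degrees C during the attack

avg   = ->(xs) { xs.sum / xs.size }
delta = avg.(attacked) - avg.(baseline)

# Technician answer: is the reaction decreasing at all?
puts "reaction decreasing: #{delta < 0}"

# Engineering answer: how fast? (degrees per sample, as a crude rate)
rate = (attacked.last - attacked.first) / (attacked.size - 1)
puts format("exit temperature falling at %.2f degC per sample", rate)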
So if you look at the reactor temperature, it indicates how much reaction is happening in the reactor. By looking at this measurement, we can tell whether our attack has an impact or not. Unfortunately, it still does not allow us to precisely compare the effectiveness of different attacks; for that, we really need the amount of chemicals. And this is where we were stuck for a couple of weeks, because it seemed like we could not proceed with our attack. Now, I had done several internships at refineries, so I know these systems very well, and I knew that inside the PLC — and also in the upper layers, for example in the optimization applications — a lot of intermediate computations happen in order to compute the most effective control commands. The thought was: maybe some of these intermediate computations will be helpful for us. And after many, many hours of work, we actually found a place in the code which was useful. We were able to extract numbers which at first did not tell us anything: they did not sum up to zero, or to one, or to a hundred, and multiplying them with the flow did not give us any useful numbers either. After another two weeks of craziness, we were able to figure it out and compute the concentrations. With that, we could compute the concentration of vinyl acetate at the exit, and we could finally translate that number into an amount of dollars lost. The outcome of the damage stage was that we could rank the control loops by their damage potential — how much money the plant will lose if we attack a specific control loop. You then encode attacks on a couple of those control loops into your payload. And at this stage we are not done yet. As I told you at the beginning, the operators will notice that the plant is producing less, and they will start investigating. So you want to create a forensic footprint that misleads them about what is happening with the plant. This slide is just to show again that we also have a human in the control loop. So what can be done? There are many ways to avoid being noticed: you can, for example, launch your attacks only on rainy days, or only on sunny days. You can also launch your attacks during a particular employee's shift, so that the employee will be investigated and not the process. The action plan could look like this: pick several ways to raise the temperature in the reactor; wait for the scheduled instrument recalibration; perform the first attack; wait for the maintenance guys to be called in and for the recalibration to be repeated; play the next attack; and so on. Here, for example, are four different attack scenarios which cause temperature deviations of different amplitudes — you just play them at opportune times. If after some time they really start doubting the reactor — okay, it's not the guy, something is really wrong with the reactor — they will invite the professional forensics guys, who will investigate what is happening with the reactor.
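Tying the damage stage together, here is a toy version of that ranking, with every number invented: lost production is roughly the drop in (flow times concentration) times the product price, accumulated per control loop.

# Ranking control loops by damage potential (all figures invented).
# Production rate ~ flow * concentration of vinyl acetate at the exit.
PRICE_PER_KMOL = 920.0   # hypothetical product price, $/kmol

baseline = { flow: 18.0, conc: 0.32 }   # kmol/h and mole fraction

attacks = {
  "XMV(1)" => { flow: 17.8, conc: 0.26 },
  "XMV(3)" => { flow: 16.9, conc: 0.31 },
}

rate = ->(s) { s[:flow] * s[:conc] }    # kmol/h of product

ranking = attacks.map do |loop_name, s|
  loss_per_hour = (rate.(baseline) - rate.(s)) * PRICE_PER_KMOL
  [loop_name, loss_per_hour.round(2)]
end.sort_by { |_, loss| -loss }

ranking.each { |name, loss| puts "#{name}: $#{loss} per hour" }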
Nobody can see what is happening inside the reactor, so the reactor is analyzed based on a set of metrics. The attacker needs to understand which metrics they will be computing, and then play the attack in such a way that the metrics mislead them, so that they will not be able to figure out what is happening with the reactor. These graphs are ugly and say little, but these are just the different metrics which I used to analyze the reactor. So, basically, everything I presented to you — here it is again in summary: at each stage there is a set of actions or tasks which the attacker needs to accomplish. These are just examples, and eventually all of that brings you to the final payload. So, the afterword. It is really true that industrial control systems are terribly vulnerable: they are all put on the internet, and the attacker can get access. That is the state of the art. Nevertheless, we still don't see large hacks, or things blowing up, because it is extremely difficult. If you look at the latest paper from SANS, the precise targeted attacks which I have just described are rated as the most difficult to accomplish. So, as a defender, one thing you can always do is raise the cost of the attack, because for the attacker the cost of the attack can quickly exceed the worth of the damage. What is also important to understand is that certain tasks the attacker has to do are the same across different types of cyber-physical systems. I have already designed several payloads, or attack instances, which can be reused in different types of cyber-physical systems. So my personal opinion is that a Metasploit for SCADA payloads is just a matter of time. I'm sorry I took more time, but thank you very much for your attention, and I'm available for questions later.
Fears of cyber-attacks with catastrophic physical consequences easily capture the public imagination. The appeal of hacking a physical process is dreaming about physical damage attacks lighting up the sky in a shower of goodness. Let's face it: after such elite hacking action, nobody is going to let one present it at a public conference. As a poor substitute, this presentation will use a simulated plant for Vinyl Acetate production to demonstrate a complete attack, from start to end, directed at persistent economic damage to a production site while avoiding attribution of the production loss to a cyber event. Such an attack scenario could be useful to a manufacturer aiming at putting competitors out of business, or as a strong argument in an extortion attack. Designing an attack scenario is a matter of art as much as of economic consideration: the cost of an attack can quickly exceed the worth of the damage. The talk will elaborate on the multiple factors which constitute attack cost, and how to optimize them.
10.5446/18843 (DOI)
The third one you see is an acoustic delay line for your TVs. It's very small, very stable, very low distortion, and extremely cheap. How does it work? The electronic signal is converted to ultrasound and sent through a moderately shaped quartz crystal. Please speak into the microphone — and perhaps repeat what the question was. Yes, yes, we can do it.
When we talk about the physical layer, everyone thinks of wires and optical links, even though a significant part of modern communication happens wirelessly. To make better use of the available frequency spectrum, devices use frequency-hopping techniques: both the transmitter and the receiver switch frequencies several thousand times per second. To ensure proper communication, the transmitter and the receiver must be kept in sync. In commercial use, this is guaranteed by standardized hopping schedules. If we want to further conceal the communication, all we have to do is use a non-standard schedule, so that a third party cannot collect the transmitted data without loss — and that loss prevents the encrypted content from being decoded. Digital signal processing provides a new tool for identifying and collecting every transmitted character string.
10.5446/18842 (DOI)
Hi. So, I didn't even start yet. Okay. Welcome to my little talk here. It's about web apps and hackers and you. First, I would like to introduce myself a little bit. Joernchen, or Jörn — my handle is a diminutive version of my passport name. By day I'm doing security consulting at a security lab, so working for our keynote speaker, or at his company at least. At night I either hack other stuff which I don't get to hack at work — more for fun, privately — or I go out. My fetish, kind of, is Ruby on Rails applications, so many examples today will be based on Ruby on Rails, but I hope you can generalize from those to other platforms, languages, and frameworks. And the most fun CVE I ever had was a format string issue in sudo, back in 2012. I'm still getting emails about that bug, because people get it as an assignment to exploit. So that was fun. I'm on Twitter, and the stuff down there is just my PGP key — so if you want to contact me securely, there's the key. Okay. But let's start with the content. What is this all about? I've been asked to give a presentation here, so I had to come up with something. I want to not only show some patterns of vulnerabilities in web applications, but also some solutions — what you should keep in mind if you're developing an application, or deploying or running infrastructure. So, mainly this is about developers, developers, developers, developers, developers. Also, it's about code. So, in the audience: please raise your hand if you're actively developing some kind of application, be it a web application or a non-web application. Okay, not too many. That's all right. So, developers write code. That's not the only thing they do: they get bug reports, they have to fix their code, they have to maintain their code and live with it. Once you've started it, I guess it's hard to get rid of it, so you've got to maintain it to a certain extent. And I'll try to show you some things you should keep in mind while maintaining your code, from a security perspective. The non-goal of this talk is making fun of anyone — not even developers. Actually, I want to point out some mistakes you can trip over, and some approaches to getting rid of a good amount of security issues just by the way you deal with your code base and with bug reports or security reports. Okay, we almost have this. The developers I've seen now — who of you is a project manager or product manager, like, herding developers? No one? One? Okay, at least one. That's good. And who's a hacker? Come on. No hackers here? At Hacktivity? Come on. Okay, a couple. That's fine. Cool. So, hackers. Oh my God. So, this is hackers: guys with ski masks and a hammer, smashing your computers. Actually, there are certain types of hackers. The ones with the ski mask, the black hats, are the real bad guys; the ones with the white hats are the good guys, who save the internet on a daily basis; and there's something in between — if you wear a ski mask and a white hat at the same time, and you may or may not do something shady, you're called a gray hat hacker. So: white hat, non-malicious; black hat, pure evil, owning all the boxes; gray hat, somewhere in between — may or may not be friendly. Just for the terminology. But then, the problem is: security is hard.
And I mean, not in the sense that this exploit does not work because ASLR is in place, or that you cannot pop a shell because there's some regex you cannot bypass. Security is hard enough for the hackers, but it's even harder for the guys who write the code, because defending your whole code base is the much harder problem: one hacker needs one bug which he can successfully exploit, and you need to close all the bugs, because only then are you safe — and still things might go wrong. So, an important security feature: passwords. Who of you does not have a password? Good. And who of you has used a password reset in the last half year? Password reset is a pain in the... it's painful. Usually it works like this — this is a real, specific example. You fill out the form that says "enter your email and you'll get a link to reset your password." That link contains some kind of secret, and you redeem that secret for a prompt where you can enter a new password, or you are even logged in directly. In Ruby, it typically looks like this: we have our super-random SecureRandom token, and we find the user by its reset token. That's very simplified, but this is how it's usually done. And there is a problem, at least with MySQL — I have to switch to MySQL now, because that's important for the password reset process we just saw. If you compare a number to a string starting with that same number, MySQL will say that's true. And if you have a string with a non-numeric start and compare it to zero, MySQL will evaluate that to true as well. That's almost like PHP type juggling. And we can benefit from this, from an attacker's perspective, in Ruby on Rails applications: with XML in older versions, and still with JSON input, we can actually pass a numeric in the parameters — not the string "1", but the number 1, a Fixnum in Ruby terminology. To demonstrate: here we have a legit password reset. We redeem the full token — a very long, very secure token, you could never guess it — and set our new legit password. And then the hacker comes along, uses the number zero, which matches this token, and sets his own password. This is pretty broken, and a lot of Rails applications made this mistake. For instance, the Black Hat CFP system. So I was able, without even submitting anything there, to become a review board member — just by resetting random accounts, and one of those belonged to a review board member. I could look at all the talks, and maybe download or upload them. But instead, I was a white hat and told them, and, yeah, was invited to Vegas. Devise, an authentication plugin which is very popular for Ruby on Rails applications, had the same problem. And this was pretty widespread — a real issue, because you could actually reset arbitrary accounts. And if no password reset is pending on any user, you would just type "admin," and admin would get a password reset link. But I didn't do this — someone pranked me. And then, by sending a number instead of a string, you could reset that password. Pretty bad. But the main point here is not only that we can reset passwords — whom would you blame for this? Whose fault is this? You could say: oh my God, MySQL is stupid, really stupid — it compares strings to numbers and does weird things instead of just throwing an error at you.
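The slide code isn't reproduced in this transcript, so here is a minimal reconstruction of the pattern being described — a sketch, not the exact slide: a Rails-style lookup that trusts the token parameter's type, and the obvious hardening of forcing it to a string first.

# Vulnerable pattern (sketch): params[:token] may arrive as the JSON
# number 0, and MySQL will happily match `token = 0` against any
# non-numeric string.
user = User.find_by(reset_token: params[:token])

# With MySQL's comparison rules:
#   SELECT * FROM users WHERE reset_token = 0;     -- matches 'abc123...'
#   SELECT * FROM users WHERE reset_token = '0';   -- matches nothing

# Hardened variant: force the parameter to a string before the query,
# so a numeric 0 can never coerce its way past the comparison.
token = params[:token].to_s
user  = token.empty? ? nil : User.find_by(reset_token: token)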
You could say the Ruby on Rails framework is stupid for letting me pass numerics instead of strings, or for not telling me that this has side effects. Or the developer is stupid because he didn't know. Or I am stupid because I disclosed this technique, because I am a bad hacker. I think you cannot really blame anyone. A Ruby on Rails application is a big, complex thing: you have a web server, a database, the framework, the programming language, and the developer who uses all these tools to create something. You have to be aware of a lot of side effects, and you cannot really judge whether it's good or bad, because there might be stuff which isn't even documented — which even the framework developers didn't know. So I would really not blame anyone. But it's a problem. Password reset, part two: ownCloud, so this time PHP. It works like this: we compute a SHA-1 of the username plus a unique ID. First of all, this is not a good random source, so you might be able to predict the token. But it was worse. uniqid gives you a time-prefixed identifier which is supposed to be unique — but not random — based on the current time in microseconds. And back when this was discovered, that uniqid always started with "4f". Then the plus operator: in PHP, if you add the string "admin" with + to the string "4f-something," it evaluates to the number four. That again is cast to the string "4" and put into SHA-1. So the globally working, one-and-only password reset token at that point in time was the SHA-1 hash of the number four — unless your username started with a number, but that's a corner case we can ignore. Yeah, that's pretty messed up. And I mean, if you're using PHP, you should be aware of at least this type-casting weirdness, because that is documented quite well. But, yeah: resetting passwords. Another reason why I point out password reset here: when I audit an application, I usually look at the authentication and authorization parts first. Password reset is a crucial feature which can allow unauthenticated access to an application, and therefore, if you're developing an application with a password reset, special care should be taken not to screw it up too much — or, ideally, not at all. So, guess what: something completely different — just another password reset. This is, again, Ruby on Rails code. This used to be — actually still is — a Ruby on Rails challenge I've put up. This password reset mechanism is a verbatim copy of the password reset mechanism in Discourse, which is an online forum thing. I reported this, and used the example as a nice challenge. So I'm going to spoil my own challenge here — if you don't want to hear it and want to play it yourself, you should leave the room. But please stay. So, it's a bit complex. We try to find the user by its reset token, and if we don't find it by its token — if the token doesn't match — we go to the else branch and pull a user ID out of the session, using the token parameter as the key. Then we find that user, and if we have a user here, we can reset the password. The problem: we have a free lookup in the session, so we can choose which key in the session hash will be used to find our user. And that does not have to be the password reset token; it could also be the CSRF token. The CSRF token is a string which may or may not start with a number.
If it starts with a number, we pull out the CSRF token here: instead of "token" we just say "_csrf_token" as the key, and we make sure our actual CSRF token in the session starts with a number — we can see it on the website. The find method will then cast that string to a number, and we find the user with the user ID of whatever our CSRF token starts with. Typically you want the number one, because then you get the admin account — and then we can reset admin's password. The nice thing is, we don't even need to trigger an email to generate a token, because everything comes out of the session. So here, again, it was a bit — I wouldn't say stupid, but brave — to allow pulling a session value out by a user-given key. Their fix was actually to prefix the session parameter with a fixed password-reset string, so you couldn't inject arbitrary keys at this point anymore. That was the main part of their fix. But what do we learn? Well, password reset stinks, because it can break in so many ways. It could also be that your tokens are predictable because, just by accident, you're not using a good random source. But the problem is, we need it, because people forget their passwords. They lose their keychain files, or they get drunk, don't know their password anymore, and have to reset it because they went drunk-shopping on Amazon. Well, that's a problem, but actually it's just an example for the whole problem space, because building secure applications — and not only web applications — is really, really hard. I mean, I myself once started coding some tiny upload script, and when I was done I was like: wait a minute, here's a directory traversal. I shouldn't do this — why did I even write this? So that's the problem. And if it's not the white hat guy who tells you about it, it will be the guy with the ski mask, and he will not tell you — he will rip off your database and be even meaner than the guy who discloses to you. In a perfect world, everybody discloses, like Good Guy Greg. He doesn't even want a T-shirt for it. I could rant a bit about bug bounties and T-shirts and all those things here, but no. So, for the developer part: if you get a disclosure about something in your application, like cross-site scripting, SQL injection, and whatnot, what's next? Well, you fix the bug and thank the researcher. You may give out a T-shirt or a bug bounty, or at least credits — because, Hall of Fame. Well, you could do just that, but no, that's not good, because you want to go a bit deeper. Try to understand what's being brought to you, because by getting a responsible disclosure or a security alert from a third party, you just got free consulting — kind of a small free audit — and you should be thankful and try to find the root cause. What went wrong there? Can we include a test for this in our continuous integration or unit testing, so this never happens again and we don't have to give out a free T-shirt again? Can we find other instances which the reporter did not see, so we can fix even more and get additional security benefit from that report? And you should really give feedback and ask whoever reported this to you: could you verify this is fixed now? Because your fix might not be 100% correct or working — you might have overlooked just another corner case.
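Pulling the reset lessons together, here is a small sketch — my own example, not Discourse's actual fix — of the safer habits: an unpredictable token from a proper CSPRNG, a fixed (never attacker-chosen) session key, and a constant-time string comparison.

require "securerandom"

session = {}   # stand-in for the framework's session store in this sketch

# Generate an unguessable token from a proper CSPRNG instead of anything
# time-based or derived from the username.
token = SecureRandom.urlsafe_base64(32)

# Store it under a FIXED key; never let a request parameter choose which
# session entry gets read back.
session[:password_reset_token] = token

# When the link is redeemed, compare as strings and in constant time,
# so neither type juggling nor timing leaks help the attacker.
def token_valid?(presented, stored)
  return false if presented.nil? || stored.nil?
  a, b = presented.to_s, stored.to_s
  return false unless a.bytesize == b.bytesize
  # XOR the bytes so the comparison time does not leak a prefix match.
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.reduce(0, :|) == 0
end

puts token_valid?(token, session[:password_reset_token])   # => true
puts token_valid?(0, session[:password_reset_token])       # => false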
You should really, really make sure that your fix is bulletproof, and — the most important thing — learn a bit about the issue and improve, because you just got free knowledge. Use it, and improve your code and your whole infrastructure or code base with it. So, another example: surprise from the man page. Gollum — I guess some of the GitHub guys started this project — is a Git-based wiki where you can edit Markdown files online, and in the backend it keeps a Git repository in order to track the changes you make to the wiki. Good. This wiki has a search method. This is Ruby code again — not exactly Rails but Sinatra, but that doesn't matter. This is where our search ends up: the query argument we can control, the rest we cannot. At this point it is passed to the shell, invoking git grep. What do we see here? These options — just assume they're fixed — are put here, right before our query, and we have an array. So this ends up as the good way to call exec. You learn that you shouldn't build one long string for a shell command, because then the hacker can come along and put in a semicolon or backticks or dollar-parentheses and inject commands. So that kind of shell metacharacter injection isn't possible here. But something quite underhanded, and often overlooked, is: we can inject arguments to git grep. The actual query can become an option starting with a dash, and the ref will be HEAD — so the search term becomes "HEAD" instead of whatever was supposed to be searched for. By consulting the git grep man page, we find the option dash capital O, which stands for "open files in pager." Sweet. So the actual vulnerability, and the actual exploit, was to search for "-O" plus whatever shell command you want. You could even put in a simple netcat back-connect — after the -O you could put a complete command line. Yeah, that was funny, because they thought "we are safe here because we are not allowing shell metacharacters," but they weren't aware of the full spectrum of what git grep can do with its arguments. The problem is a kind of in-band signaling, which is not too obvious, because arguments and options are mixed in the same command line. It all looks nice, but in the end you forgot one little thing, which then is a disaster in terms of security. Right. So those were some showcases of password reset, and a little man page exercise. We have seen several ways a password reset can fail — tokens being predictable, or bypassable with a number instead of a string, because MySQL is weird. But how to improve? Can you read this? No. Shit. Okay. Basically, this is the Git log of the media library of Android, right after jduck reported Stagefright to them. I guess you have all heard of Stagefright. What you could basically see here, if you could read it, is a whole page of commits just about integer overflows: enable an integer check here, fix a small non-exploitable integer overflow there. So it looks like when Stagefright was reported to the Android team, they went: oh my God, integer overflows — we never check for these; let's quickly go nuts about this. I guess they learned something, and I hope they keep it up, because once you learn it, you should not forget it — otherwise you'll make the same mistake again.
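Going back to the Gollum example for a moment: a minimal sketch of how such an invocation can be hardened — my own illustration, not Gollum's actual patch — is to keep the exec-array style and additionally end option parsing before any user-controlled value.

# Untrusted search term (stand-in for Sinatra's params[:query]); the
# hostile input from the talk is an option, not a pattern.
query = "-Otouch /tmp/pwned"

# Vulnerable shape (sketch): the term sits where git grep still accepts
# options, so "-O<pager>" gets parsed as a flag.
#   cmd = ["git", "grep", "-I", "-i", "-c", query, "HEAD"]

# Hardened shape: bind the pattern to -e so a leading dash can never be
# read as an option, and end option parsing with -- before any paths.
cmd = ["git", "grep", "-I", "-i", "-c", "-e", query, "HEAD", "--"]

# The array form of system/exec never involves a shell, so metacharacter
# injection stays off the table too.
system(*cmd)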
You should try to generalize the issue, as I said before, and try to find the pattern which you can use to find other instances in other parts of your code and in other projects, and you should apply it to whatever you're maintaining in order to get rid of this type of issue or bug. Even better, you should read mailing lists like oss-security — or not Full Disclosure, as Felix just told us this morning in the keynote. You should have a good source of information where you can find examples of vulnerabilities, which you then take, read, understand and think about: would this affect me somewhere in my code? Can I check for this type of issue even before it's introduced? Because someone in the team may make this mistake, and if we have a test for it, it can never happen that it gets committed, because the test will fail. You'll say: oh my God, my bad, I'll never do this again — and you fix it in a proper way.

Well, this brings us to tools. You should try to find a fitting tool chain for your project, for your application, in order to have a good baseline of security scanning in your build or testing process. The same is true for the hacker perspective: have a good framework for automatically finding bugs — and fabs over there, that guy, he will tell you about his tool tomorrow; he has the bug-generating machine. But I want to focus on the bug-avoiding machine.

But be aware: a fool with a tool is still a fool. I don't know if anyone remembers this ancient CVE, OpenSSL in Debian. Raise your hand if you remember that one. Okay, a couple of you. Back in 2006, a Debian maintainer was like: OpenSSL, quite an important piece of code, I'll run an automated scanner called Valgrind on it. And Valgrind complained about one uninitialized variable and told this maintainer: hey, look, there's an uninitialized variable, that's not good. Okay, the maintainer said, let's just initialize it, and committed this as a security improvement or whatever. And that was compiled into every Debian and Debian-based distro around the globe from 2006 on. In 2008, someone figured out that this was a mistake, because the variable actually had to stay uninitialized in order to contribute enough entropy for the key generation in OpenSSL. Every RSA key was basically just depending on the process ID of the process which created it, and not on any additional randomness. Oh, shit. Yeah. So we had a pretty nice, tight key space and could just generate every possible SSH or X.509 key — every possible RSA key of a given size — created on a Debian between 2006 and 2008. It was pretty bad.

So, lesson learned: try to verify what your tool tells you, because it's just a stupid computer. It tells you what to do, but you should tell the computer what to do — that's the right way around. And I mean, if you ever had a web app scanner run against a web application and got a 500-page report, being told "could you verify that SQL injection stuff" — no, that's terrible. So don't trust it blindly.
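To make the Debian keyspace problem tangible, a toy illustration — this is not OpenSSL's real code, just the arithmetic of seeding a generator with nothing but a process ID:

```ruby
require "digest"

# Stand-in "key generation" whose only entropy is the PID.
def weak_key(pid)
  Digest::SHA256.hexdigest("seed:#{pid}")
end

# Linux PIDs default to a maximum of 32768, so this enumerates
# every key such a system could ever produce.
all_keys = (1..32_768).map { |pid| weak_key(pid) }
puts all_keys.uniq.size  # => 32768 -- the entire "keyspace"
```

The real affected keyspace was somewhat larger — architecture and key size factored in — but still small enough to precompute completely, which is exactly what happened.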
Another thing — this is an example of something else you should keep in mind when building an application, or the whole landscape around it. The other day I was poking at GitHub, because they have a really nice security team and a fun bug bounty program. GitHub has some internal tool called gerve, which basically looks at your SSH public key fingerprint, pulls your user out of their database, and resolves your permissions: may you access this repo with that key or not? So if I connect by SSH to git@github.com, it will look at my key and see I want to clone fabs' repo, but I may not, because it's private and I'm not fabs. Think of it as a super smart version of git-shell.

Well, yes. The front-end SSH picks up your SSH connection, looks up some stuff about you — which is signaled via environment variables — and then starts a second SSH process on the same host to the actual backend where the repo you want to clone or push or pull or somehow access lives. And I figured out that by messing around with my username on the web interface I could inject environment variables into that second SSH process, because it somehow got a variable that said username=joernchen. So I polluted the environment a bit: I was able to inject newlines into my username and introduce new environment variables that way. That was the actual payload to make the second SSH process spawn a shell for me on GitHub.com. I preloaded libfakeroot, because then the backend SSH process would think: hey, I'm running as root — cool — but it could not access /root/.ssh/known_hosts and needed to ask for a password, because it actually wasn't running as root. Then I set a DISPLAY variable in order to trigger SSH_ASKPASS, which ended up pointing at /usr/bin/ex — so vi without the v, the visual stuff — and I could just type an exclamation mark and whatever shell command. Wonderful.

What this tells us: it's not always what I call self-contained issues. A SQL injection or a buffer overflow — if you're used to reading code — is yelling at you from the code, saying: hey, look at me, I'm a vulnerability. But sometimes it's more complex, because you have environmental issues. You must see whatever you're running on your servers in its context, because the environment it is running in obviously influences the whole application. So you should, from time to time, try to switch the perspective on what you're developing — and even if you're auditing stuff, you usually get better results if you change the perspective: not only looking at the login procedure, but also at what the web server passes to the backend — maybe some header variable I haven't even thought about. A bit of open-mindedness helps a lot.

Well, five minutes. Okay. So, scheme time. Become a hacker — actually, that's optional. I just would like to point out: try to be the attacker yourself if you're developing an application, and try to question a bit what's written there, because that's what those hackers usually do. They look at the code and question every single statement, every single line, in order to find a flaw. And think about threats. So we've got a threat here — that's Luke Skywalker. And we've got our asset, which is the Death Star, and our asset, the Death Star, has a small, tiny security vulnerability, and that's the whole reason Luke Skywalker flew in and blew up the whole Death Star. So we've got threats. And what is next to threats? Models. Yes. More like this. More like a blueprint. So we've got threat modeling — that's great, too. Who of the developers who just raised their hands does threat modeling on their code? What? Okay. If I come back next year, I want to see all your hands up.
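Back to the GitHub bug for a second — the core of it fits in a few lines. A sketch of the environment pollution, with the file format invented for illustration (not GitHub's real internals):

```ruby
# A front end writes per-connection variables for a second process,
# trusting the username verbatim:
username = "joernchen\nLD_PRELOAD=/tmp/libfakeroot.so\nDISPLAY=:0"

env_block = "USERNAME=#{username}\n"
print env_block
# USERNAME=joernchen               <- what the developer expected
# LD_PRELOAD=/tmp/libfakeroot.so   <- smuggled in via the newline
# DISPLAY=:0                       <- makes ssh consult SSH_ASKPASS
```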
It's really helpful, because if you have kind of a blueprint of your asset — which in this case is the White House, a pretty high-value target — you should try to… I can only give you pointers here, because I can't explain the whole threat-modeling process right now, but you should try to decompose your application in a way that makes sense, keep it a bit modular, get a good overview, and identify: where are my boundaries, especially my trust boundaries? Where is the user input, where does it stop, where is it sanitized, where are the threats? By drawing a more or less abstract picture of your asset or application and drawing lines as boundaries, you would almost instantly see the threats. You might have to think about it a bit, but you would see where they can happen and how they would happen. Then you go on and mitigate the threats — and then you're not done, because you repeat it over and over again. Yes, I know, security is not a thing you just plug in, it's a process, blah, blah, blah. For a formalized approach, look up Microsoft's SDL, or STRIDE, which is the generic threat-modeling model. I wouldn't require anyone to just follow plain SDL or STRIDE, but to adapt it in a way that fits your needs, because not everybody develops like Microsoft does. But that's the stuff they use themselves.

So, over and over again. That's the biggest problem I see: people don't learn from other people's mistakes, and the same types of bugs pop up over and over and over — it's 2012 and we still get exploitable format strings in stuff like sudo. Come on. Why? Let's be pessimistic for a moment: why does this happen? People are people, and maybe they must fall down on their own in order to see what the actual problem is. People don't learn. That's bad for the product, for the software, for the whole software landscape. It's good for me — it's job security for me, because if every now and then a new cross-site scripting gets introduced into the stuff I have to test, I'm happy: I find it, and I get booked again. But we should fix this once and for all. There are approaches to restart computing from scratch, like starting with unikernels not written in C, or the whole clean-slate approach. But to be realistic — I wanted to point out some things, and I hope they made it through — don't despair, and don't be dumb. Just try to learn. Never stop learning. Try to learn from the mistakes, keep looking at your code, assess yourself, try to get a bit of the mindset, and use the tools you have and can afford and can put in there.

So, that's my last slide. Thank you for your time. My counter here is at zero, so I think I'm done. If you have any questions — I don't know, do I have time for questions? I don't see the next speaker yet. So, any questions? Okay. Thank you.
Modern Web application frameworks offer a vast amount of ways to introduce security vulnerabilities. In this talk we'll have an overview of common and not so common patterns of vulnerabilities. The main focus will be Ruby on Rails applications, but also generic patterns which apply to other languages and frameworks will be elaborated. Instead of just showing off with 1337 bugs and exploits, mitigation strategies will also be provided.
10.5446/18841 (DOI)
The year is 2006, and I'm providing network and security consulting to an online gaming company acting mostly in Asia Pacific. I'm called in over the weekend because of strange performance of the servers and all kinds of interruptions of the service they are providing to their players. After a first investigation I find out that the problem they have is not at all related to their own network, not at all related to their own servers, and actually also not related to any of the traversal network devices that players cross in order to get to the gaming environment. From the player perspective I see very big delays, jitter, and at some stages even a total impossibility to reach the gaming servers.

Understanding that the problem is not on their side, we call the co-location provider — the environment where the solution is installed — and tell them that there is a problem. They know nothing about it; they begin to investigate on their side and very quickly also understand that everything is external to their border routers, nothing is actually wrong in their network. So on their side they take the next step and call their major ISP, the one providing the big pipe that carries the internet traffic to the site. The ISP tells them that they have seen some problems in the network but are also not really aware of what's going on, and says: we will call you back, let us just check what the problem is.

After a while the ISP understands that they are dealing with a highly volumetric attack on one of their other customers. They contact that customer and tell them: look, we see very heavy traffic reaching your servers, can you please take care and assist in troubleshooting what's going on? The customer starts trying all kinds of things together with the ISP, and after many hours of trial and error the ISP understands that they will probably not get a solution from the customer and goes for the most radical option: they decide to null-route the entire traffic of this customer — what we also call blackholing in networking. From that moment everything is back up and running for all the others, of course also for my customer, but the attacked customer is out of service. I think it took something like ten hours until the attack was over and that customer was able to return to service.

Now what we see here are four different entities in 2006 that are all unprepared. First of all, my customer. My customer did not have the right tools to understand that when the service was interrupted or badly harmed, it was not related at all to his servers or his own network infrastructure. Next, the co-location provider was also totally unprepared: they did not have the tools to see for themselves that there was a problem, and furthermore they took an architectural decision to rely on one major ISP, meaning that the moment the link or the network of this ISP is saturated, they have no alternative and their entire customer base is damaged by the collateral effect of the attack. We're moving on.
Moving on to the ISP: here again we have somebody who is unprepared, not at all thinking of the possibility — and remind you, I'm talking about 2006, nobody was really thinking of the possibility — of highly volumetric attacks. And I must say that when we're talking about that corner of Asia Pacific, it means some island with low bandwidth capabilities anyway, so this ISP was totally unprepared. And finally, the customer that was the victim of the attack was also not prepared: it took this customer very long to understand they were under attack, and the problem was that the customer was not able to do anything — as I mentioned, the final resolution of the problem was the ISP blackholing the entire traffic.

I would like to ask you, the audience here — and I don't know how experienced you all are with DDoS attacks, but you probably read a lot of news — do you think that something like this can happen also today, ten years later? It's an interesting question, but I can tell you that there is only one answer: yes. Unfortunately it can happen, and it happens all the time.

I was told also to say some things about me. My name is Jochen Hanse-Sommerfeld, as was mentioned. I'm the CEO of Comsec Group, an Israeli company providing security services and security professional services in Israel and abroad, with subsidiaries in the UK as well as in the Netherlands, and I have many years of IT and security experience in all kinds of areas. Comsec provides, as part of its services, a DDoS simulation service that gives customers the possibility to check and validate their preparedness and readiness for DDoS attacks. What we do there is simulate a real attack on customers, working with three teams. One team is the Red Team, a team of experts responsible for attacking the target system, the customer's system. Another team is the White Team: that's the customer himself, doing the same things he would and should do in a real attack, actually trying to fight us in order to make sure that whatever we try does not impact the service we are testing. Then we have a third team, the Blue Team. It sits next to the customer and is responsible for the communication between the White Team and the Red Team, because — you may understand that although many customers are sometimes confident that there is no chance we can bring them down, it always works — we need the possibility to push the button and stop the attack immediately, since we're dealing with real services. And second, what is also very important is to give the management that ordered the attack full transparency of the impact, so that when we are attacking we get full transparency of the results. This is of course very important in order to give the possibility to remediate and correct the weaknesses we find in the system.
Furthermore, before joining Comsec I was for more than seven years the chief information officer and chief security officer of Playtech, the biggest online gaming platform provider in the world, and there I had the opportunity to deal with a lot of very aggressive, very volumetric DDoS attacks — I've seen DDoS attacks of 250 gigs. What I'm trying to do in this presentation is to bring both things together. One is my experience of many years of dealing with DDoS attacks as a defender, which is probably what most of you — I hope, at least — are doing. On the other hand, because we run a DDoS simulation at Comsec, I was able to integrate into this presentation all kinds of experiences from our DDoS expert team that is part of those testing simulation exercises. So we're bringing two perspectives together: the defender's, which we're all used to, and the attacker's.

I know that for the technical audience here it is always very boring to see numbers, so I will only talk very briefly about some trends in DDoS — I think it's important to mention them — and then I will immediately go to DDoS case studies, or actually exercises, so that you will see common practices that are used in order to mitigate DDoS attacks. And I emphasize common practices, because common practices are not always best practices; this is very important to understand. Next, as part of those cases, I will point out all kinds of technology limitations, which are in many cases unknown and important when you are focusing on DDoS mitigation or planning to deploy something. And last but not least, I would also like to share with you a DDoS mitigation lifecycle that helps avoid many of the problems and mistakes that I have personally made and that many people in the industry are making.

So let me introduce to you our DDoS target. I decided to use a virtual company name: the target. It has nothing to do with the Target that was attacked two years ago — it's really important that you don't connect it to that Target, it's not the real Target — but I think there is no better name for a target company than Target. I'm sorry also for the quality of the layout; it seems the screen here is not at the right resolution, so it looks very bad, but I hope you can at least understand the idea. What you see here is an ISP backbone and a very classical — at the moment without any redundancy — network of some online company called Target. It could be anything: e-commerce, banking, etc. It has a border router, it has a load balancer, it has a firewall, in this case also including an IPS. Furthermore we have front-end servers and of course also back-end servers, and I thought it was not necessary to go into more detail about the architecture.

Now, as I mentioned, I would just like to mention some numbers so you are aware of what's going on in the world of DDoS. These come from a report by Prolexic, which is today part of Akamai, mentioning that we have a 133% increase in DDoS attacks in the world over the last year. The amazing thing is that we see a clear dominance of layer 3 and layer 4 attacks, and I would like to explain where that dominance comes from.
It comes mainly because the attackers also act under economic assumptions: it is so easy to successfully attack customers or victims with layer 3 and layer 4 attacks that there is actually no need to go further, and if there is no need to go further and no need for more investment, they will not make it. We see an increase in the average attack duration: if last year it was about 17 hours, today we are already at about 20, almost 21 hours — and this is the average; I've seen attacks that went on for a month. We actually see a decrease in the peak bandwidth: this year the peak bandwidth is not what we saw last year, where there was an increase all the time — it's not increasing at the moment. But what we definitely see is that volumetric attacks over 100 gig have doubled compared to last year. And here, just one last slide — I don't even know if you can read it, but what it says in general is that the most attacked industry is the gaming industry; this is where I got all my experience. And you see of course also banking and technology, industries that are very actively attacked by DDoS attacks.

So what I would like to do in the next half hour is to give you an idea about real and simulated DDoS attacks on Target — again, not the real Target, just the company I call Target for this presentation. What I will try to show in these cases is that many of the solutions and concepts that are implemented, and are actually sold as bulletproof, are not the medicine against DDoS and do not always do the job.

What you see on the layout is Target, our customer, and several botnets coming from various autonomous systems attacking our Target company. What we would like to talk about in the first real attack is a layer 3 DDoS attack of a volumetric nature, and in this case Target still does not have any specific DDoS mitigation capability implemented in the network. They have all the standard things we know, but nothing related to DDoS. They're currently under attack, and Target's security team understands that something is going on. It tries the first thing all of us would naturally do when attacked: look at who is attacking you. So at the first stage they begin to search for IP addresses that show up with invalid connections and start blocking them with access lists on the routers. Very fast they understand that the attack is ramping up and becoming too strong: the routers are not able to deal with it. They contact their ISP and tell them: look, you have to assist us, please block these IP addresses so they will not reach our data center. The ISP does exactly the same, blocking big IP address spaces provided by Target, trying to get control of the attack. The problem is that what we are dealing with here, in this highly volumetric attack, is UDP packets, maybe spoofed SYN packets, and the amount of IP addresses being blocked is so big that, while the servers may see a bit less stress, real clients can no longer reach the site either — so actually this is not helping very much.
Then the ISP tells them: you know what, I have a very good idea — let us work with some kind of access rates. Let us limit the rate of new connection setups, and anyhow limit the number of concurrent connections to the site. But again, when you do something like this — and certainly when we're talking about a volumetric attack — it will not help a lot, and this is exactly what happened, because you are not only disconnecting the evil that is trying to attack you, but also legitimate customers. So at the end of the day they end up in a situation where the ISP gives them one additional piece of advice: maybe you can tell me exactly which services you are running in your environment. What actually came out is that Target was being attacked on all kinds of UDP ports and all kinds of TCP ports, and none of them were related to any service running on their site. So the last advice they got from the ISP was: let us block all those ports that we don't need. They blocked them on the ISP routers, they blocked them on Target's border routers, and from that moment the attack was over and everything returned to normal.

What we see in this specific attack is clearly that if they had discussed the ports beforehand and configured them properly, this specific attack — which seems to have been an opportunistic attack and not a targeted one — would probably not have done any damage. So, just a side comment at this point: even if you don't yet have any specific DDoS mitigation device implemented in your organization, you should always study your network and the protocols you're using very well, and make sure you are already filtering things like wrong ports at the border. Because when you look at the chain from the border router into your organization, the later you do this filtering, the more expensive it is — the parsers work significantly harder. Doing something like this at router level is very easy, and if you have the opportunity to do it together with your ISP, he can do it on his side already, and it will not saturate your link at all, which is definitely a very good thing to do.
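As a toy model of that default-deny advice — real filtering of course lives in router ACLs, this just shows the decision logic, with the allowed ports assumed for illustration:

```ruby
# Suppose Target only serves HTTP/HTTPS over TCP (an assumption).
ALLOWED = { tcp: [80, 443], udp: [] }

def pass?(proto, dst_port)
  ALLOWED.fetch(proto, []).include?(dst_port)
end

pass?(:tcp, 443)   # => true  -- real customer traffic
pass?(:udp, 1900)  # => false -- the flood dies at the border
pass?(:tcp, 23)    # => false -- so does everything unexpected
```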
But again: we were attacked, we didn't operate very well, so we ask ourselves what the next step is. Of course Target sat down again with their experts and with their ISP and got the solution: we need scrubbing. Scrubbing devices usually deliver the protection by diverting the traffic through a scrubbing center. Normally the traffic goes straight to your servers; when you understand that you're under attack, you use GRE tunnels — or other kinds of tunnels that connect you directly with the scrubbing center — and with BGP advertisements you make sure that the specific address spaces related to your service are routed through the scrubbing center. Technically, you need to be aware of the fact that your traffic is seen by the scrubbing center only unidirectionally: only the ingress traffic to your site goes through the scrubbing center, not the returning traffic. Have a look here: both the attacking traffic from the botnets and the real user traffic, the moment you decide it and do the BGP updates, travels through the scrubbing device. The scrubbing device throws away bad traffic and sends the good traffic on to your site. The entire idea is to have a clean pipe from the scrubbing center to the site.

I would like to take another real attack that I have seen, where Target — and you understand, Target could be many companies, it's not always the same company — was again attacked by a volumetric attack, this time 70 gig. And as promised by the experts, the ISP and the scrubbing service provider did a really good job and were able to bring the 70-gig attack down to 120 megabits per second. An amazing job. But what was not discussed with Target: even though all those scrubbing service providers and scrubbing device vendors tell you that you get a clean pipe to the data center, it is not entirely clean — at certain granularities it still contains bad traffic, and this bad traffic of 120 megabits per second was too much for the infrastructure. The old routers that they had not replaced crashed again. So look, what a nice thing, right? We have an amazing service that cuts 70 gig down to 120 megabits, and still we are suffering. Another thing they did not consider: they had not taken enough bandwidth commitment from the ISP — they had only sized the bandwidth for their real peak traffic. If you want to handle such an attack, you need to have some spare. There was no spare, and the problem was that the network itself, not only the routers, was saturated. So we have a very problematic situation where, although we have a very efficient scrubbing solution, we are not able to take care of the attack.

Now I would like to come to a simulated attack, something we did in our company. We attacked the customer with a very simple attack, a SYN attack, but we did it in a highly distributed manner: the connection rate of the SYN attack from each specific IP address was very, very low, but aggregated all together it amounted to quite a nice volume — which was a joke for the scrubbing center. But what was the problem? The scrubbing center, as I mentioned, sees only one direction of the traffic, and since the traffic was coming from so many IP addresses, none of it was considered an attack. The common solution that you have all known for years on firewalls and load balancers is SYN cookies: when you have a feeling that something is going wrong, you use a proxy for delayed binding and work with SYN cookies, letting the other side respond, and then you know whether it's a real client trying to access you or just a SYN attack. But in order to do that, you need to see the traffic in both directions — you need a proxy — and, as mentioned, the scrubbing device does not see the returning traffic. Therefore this was not a capability that could be provided.
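For reference, a rough sketch of the SYN-cookie idea — the real thing lives in kernel TCP stacks, but the principle is to encode the handshake in the sequence number instead of keeping state:

```ruby
require "openssl"

SECRET = OpenSSL::Random.random_bytes(16)

# Initial sequence number derived from the connection 4-tuple.
def syn_cookie(src, sport, dst, dport, counter)
  msg = "#{src}:#{sport}:#{dst}:#{dport}:#{counter}"
  OpenSSL::HMAC.hexdigest("SHA256", SECRET, msg)[0, 8].to_i(16)
end

# When the final ACK arrives, recompute and compare the cookie.
# That verification step is exactly what a scrubbing center that
# only ever sees the ingress direction cannot perform.
```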
So here again we had a problem: although we have a great device in place, we were not able to mitigate this attack. I would like to mention at this stage that the scrubbing device vendors of course understand these limitations, and they thought about all kinds of solutions — and there are solutions, but unfortunately none of them work perfectly. One of the solutions is sending resets. The problem is that when you send resets, you can get very serious problems with protocols or services that are time-sensitive. For example, in the industry I was in before: if you have a big poker network, many players playing together, and you just disconnect them, you may understand what that means — it's a big problem. Another thing they came up with is HTTP redirects: you come in with HTTP and I redirect you, and this redirection gives me the possibility to validate whether the source IP coming to me is an attacker or a real client. But this is of course self-explanatory: it only works with HTTP. It doesn't even work with HTTPS, so when you're on SSL this is not a good solution. So they came out with other things, like out-of-sequence solutions — an out-of-sequence ACK: if there is a real client on the other side, it will re-initiate. But as you probably know, many network security devices block those packets, because they track connection state, and when they see that something is out of order, they just drop the packet and it doesn't go through. What I'm trying to say is that even though you have a very expensive, very good solution in place that does a lot of good things, it's not good enough.

Another very important thing to understand is amplification attacks. In amplification attacks we have the same problem: the attacker uses open servers — DNS, NTP, etc. — and again we don't see the returning traffic at those scrubbing devices, and therefore it is also very problematic for us to take care of them.
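To get a feeling for the numbers — the amplification factors below are rough, commonly cited ballpark figures, not measurements from these exercises:

```ruby
dns_factor = 50    # small spoofed query -> large DNS ANY response
ntp_factor = 500   # tiny monlist request -> huge NTP response

attacker_mbps = 100                 # modest botnet uplink (assumed)
puts attacker_mbps * dns_factor     # =>  5000 Mbps at the victim
puts attacker_mbps * ntp_factor     # => 50000 Mbps at the victim
```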
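And going back to the HTTP-redirect validation mentioned a moment ago, a minimal sketch of the idea in Sinatra — the names and the token scheme are invented, and real mitigation boxes do this at line rate rather than in an app server:

```ruby
require "sinatra"
require "digest"

SECRET = "change-me"

def token_for(ip)
  Digest::SHA256.hexdigest("#{SECRET}|#{ip}")[0, 12]
end

get "/*" do
  if params["v"] == token_for(request.ip)
    "passed on to the backend"  # stand-in for proxying the request
  else
    # A real client follows the redirect; a spoofed source never even
    # sees it. As noted above, this only works for plain HTTP.
    redirect "#{request.path}?v=#{token_for(request.ip)}"
  end
end
```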
Am I on time? I am, right. Okay. So after having all this trouble, and being tired of all the money Target had already spent on technologies, the customer sat down again with all the experts and got the next very important tip: you have to go with an inline appliance — an appliance that you install on site, where you can overcome the problems you've seen before. Inline, in our case, means that we are still using the scrubbing service, but when traffic enters our site it goes through this device, which now sees both directions of the traffic and can also take care of dropping attacking traffic. But we should be aware that those inline devices have their limitations too.

Let me show you attack scenario 5, where you see the same botnets again — it's always the same layout, nothing changes except maybe the protocol level. Here again an attack, relatively massive, and again mitigated very well on the scrubbing device. But the inline device, now responsible for taking care of all those things the scrubbing center cannot handle, was not configured very well, and it did not do the job. So Target decided with their network engineers: okay, we have to connect to this inline appliance — which is installed somewhere remotely in a data center — in order to adjust the configuration and make sure nothing bad gets through. Surprise, surprise: again a lot of money invested in good technologies, but without understanding that it is also very important to design the network in a proper way. What happened here is that Target had not taken care that the management was out-of-band; instead it was in-band. They needed to access the device through the same pipe that was currently saturated with the attack. So Target was not able to connect to the device and make the configuration adjustments in order to take control. They had no choice but to wait until the attack was over; only then could they finally connect to the device and fix the configuration. So again, very important for all of you to understand: good technology matters, but you have to make sure you implement and deploy it in the network in the right manner, and make sure that, whatever happens, you can access the device.

What Target did next was sit down with the experts again, and of course they understood that what they have to do is connect to the device through the private MPLS network of their data center provider. You may understand that when you are under a DDoS attack, the part that is really problematic to access and use is the public part of the internet — but the internal MPLS network of the provider is not impacted at all. So the moment you take, for example, this solution — and there are other possibilities as well — you make sure that no matter what happens you can access all internal devices and make your adjustments. And we have to understand that in exactly such warfare, the ability to be dynamic while under attack and to change things is super, super critical.
Another good example, again pointing at architecture. We decided to attack Target again; now they are very sure they know what they are doing. We made sure we could go through the MPLS network to reach the device, everything was settled perfectly, and we were asked to attack a marketing website. We did it, again with a very simple attack, and it took us exactly two minutes: the site was down — and not only the site, all kinds of other problems appeared in the network. So together with the White Team and the Blue Team we checked what had happened. It's very simple: because of PCI requirements they were required to do daily log reviews for this front-end server, as for all the other servers payment information was flowing through. Since most of you of course don't have the possibility to review logs on a daily basis, everyone goes for the same solution: everything goes to a SIEM with alarms set up, which is definitely considered by the PCI assessors a compensating control for daily log reviews. But they had done this for this poor front-end server too, and the attack was simply filling up the SIEM. They were blind — nothing worked anymore. Of course the attacked server itself was interrupted, but the security team was also not able to see anything that was going on in the company in terms of security. You may understand that there are many DDoS attacks today that are actually smoke, in order to have other attacks running next to them — and here is exactly an example: security team totally blind. And by the way, we are in the age of big data, we like to collect everything, and here too you can very easily reach a point where too aggressive data collection hurts infrastructures and makes trouble — not specifically on the server that is attacked, but on other things that are influenced or collocated next to it.

Also in this situation, we said: okay, let's try an amplification attack. They had a file server, and we attacked the file server with 1K requests. We were not required to make too many requests, so the inline device did not recognize us as any kind of attack. What happened is that we were able to fill up the bandwidth capacity with the returning traffic, because each 1K request brought back a file of several megabytes. This is also very important to understand: you might be in a configuration where you are limiting the number of connections and concurrent setups, but at the end of the day the request can be amplified by the returning traffic, and it's very important to consider that too.

Okay, I see that I'm very close to the end of my time, so I would just like to mention some other things that are also very important to consider. In many situations customers use CDN solutions for DDoS mitigation. Please be aware that this is a very effective solution, but it can be very problematic when you're using dynamic content: it works very well for static, highly distributed content, but it's a big problem for dynamic content. And I would like to touch on one more thing, which I was even asked about yesterday when I was meeting with a customer: many customers ask how effective the use of geolocation is. Here I can tell you that we ran a simulation for Target again, attacking them from all over the world. They understood that the service they provide to customers was only relevant for a specific geographic region, so what they did was very simple: they just blocked everything that was not coming from this region. It of course blocked us for a moment, but we understood what they were doing — the same as any experienced attacker would — and then we just used the part of our botnet that was in the same region, and we were again able to bring them down and attack the servers without any problem.
So let me now round up. If you look at the scenarios we have gone through, you see very similar things in all of them. I think the most important is that customers are not prepared. Customers — it's probably human nature — think it will happen to others and never to them, and they are not prepared accordingly. Therefore, from my perspective, the most important thing, before any of the technologies you implement, is to really think about how to set up your network, and to make sure you create the tools — because, as you have seen, in many situations people were not even able to detect that they were under attack, and they wasted a lot of time on all kinds of troubleshooting instead of taking care of the specific attack. It is very important to make sure that people are trained; DDoS requires incident response like any other security incident, and this is also something that in many cases is not done: you deal very well with other security attacks, but when it comes to DDoS, nobody really thought about it. And I definitely ask each and every one of you to do testing. You can use simulation; sometimes you can do it yourself. I would definitely not recommend trying to get attacks from the darknet — if you need that, you should really contact companies that do this. What is also very helpful is to collect intelligence, in order to know in advance what is happening in the industry and what might happen to you specifically.

Okay, because of the time I will not go through the entire lifecycle here — I'll let you read through it — but I will just mention a few points. Identification is super important: making sure you have the capability to identify and detect that you are under attack. Traceback, although it is not always possible because the traffic is very often spoofed — if you're able to trace back, it can definitely assist a lot: if we understand the origins, we can work with all kinds of blocking techniques. Understand the impact: sometimes we panic where we don't need to, and sometimes we don't panic where we should. And I think what is most important, and very often not done when it comes to DDoS, is the post-mortem. People are so happy that the attack is over, so happy that the CEO and the board are not troubling them anymore, that they don't do the most important thing: talking with their teams about the attack again — about what went wrong and what can be done better in order to reduce the impact.

I would like to finish with the same sentence that was at the beginning of the presentation: DDoS protection technology is far from being "install & forget". Whatever you do, whatever technologies you implement, don't think that you put it in place and you're done. It's warfare. It requires the right architecture, it requires attention all the time, and it needs to be dynamic, because the world out there that is trying to attack us is dynamic as well. And as I say all the time, the only thing that is constant in security is change. Thank you very much, and I will be happy to answer your questions.
Each of these techniques can then also be deployed in a few different ways. Both protection techniques and deployment architectures will obviously affect the quality of protection while under attack. Although many organizations are failing with DDoS protection, I would say that most of today's attacks can be successfully mitigated. But don't get me wrong: an effective mitigation requires a good understanding of how the technology operates, plus deep knowledge of your network and the applications traversing it. No matter what vendors and service providers promise, DDoS protection technology is far from being "install & forget". In this presentation I will discuss common mitigation techniques, deployment methods and misconceptions of DDoS protection.
10.5446/18840 (DOI)
Hello, I'm Jos Weyers, and I'm talking about stuff. I'm a member of Toool, and Toool is The Open Organisation Of Lockpickers. We pick locks for sport: we open locks without keys, without force, and without damaging them, and we have actual competitions for this. Some competitions are international, so there are championships and everything. If you're picking locks for sport, there are a couple of rules you should abide by — at least that's how we play it. One: you only pick locks that you own, or that you have explicit permission from the actual owner to pick. And, if you can help it in any shape, way or form, refrain from picking locks that you rely on, because at some point you will fail, and then maybe you can't open or close your front door, your server room or whatever. That's a problem. If you live by those rules, you probably will not get into trouble. Furthermore, I'm an active member of Hack42, which is a hackerspace in Arnhem. We reside at a defunct German military base. We're blurred on Google Earth. It's awesome. And when Hackaday, the American blog, did a tour through Europe to look at hackerspaces, they also visited us, and the first line of their article was like: wow, this is the most awesome space ever. And of course, I can only agree.

To start off my talk, we'll watch a little video. [Music playing] 3D printing of keys — what has the world come to? Show of hands: who thinks this is a good idea? "I think it's a cool idea." "I think it's a cool idea." "It's a neat idea, but I don't think it's a smart idea." "Well, I don't know — I mean, you're putting your keys in the cloud, and God knows where, or what's going to happen." Well, the fun thing is, this is an ad for a Belgian insurance company that offers this on their site. So at least if your house gets looted afterwards, you know it's going to pay for it. That's a good thing. What you saw is that they put a key on a turning pedestal and, with a sort of Kinect-style scanner, basically took a 3D scan of it, and from that 3D scan they generated a printable file.

The reason they had to do this is basically that they're Belgian, and Belgium, as we all know, is part of Europe, which is not the USA. Because in America, if you want a lock on your front door, you basically have a couple of choices: that's a Kwikset, that's a Schlage, and that's "very expensive". So that's three choices, and in a residential setting you'll probably go for option one or two. So you have a Schlage and a Kwikset, which kind of narrows the choices down. What you can do then is skip the turning pedestal and the complete 3D render: one picture can be enough. And there was actually a company that did that — I don't think they exist anymore. This was Schlussel.com. What they said: just take a picture of your key, flat on, on a white background — click — mail it to us, give us $5, and we'll ship a copy of that key to your home address. Well, personally, I would not use my home address for that return envelope. But the idea is neat: it just takes one picture — click, go, you have a key. Cool, right? And they actually wrote some interesting software that analyzes your picture, so they don't make a stupid dupe of the picture you send in. First it figures out what the actual key is, and then it starts looking at where the depths should be and what the depths actually are.
And what you see at spacing number one — this one, this one — is that the red line is a bit higher than the actual key is. (This is not good… there we are. I don't know, I did nothing, I blame it.) So the red line is a bit higher than the actual key is. What they did is figure out the brand and type of the key and what the original code was — the gap there is wear and tear of the key. So the copy you receive in the mail probably works better than the original you sent them. I think that's neat. I think that's cool, right? They make a duplicate of your key from one picture alone. I believe they now need both sides, but the idea remains the same: they can do it with one.

There are actually other companies who expand on that same idea. This is Outbox. They were running in, I believe, only Manhattan, because it doesn't scale that well. What they do is basically be reverse mailmen — reverse physical snail-mail mailmen. They show up at your door and take your paper mail, take it back to their office, scan it, and mail it to you electronically. So even if you're not at home, you can read your spam. Whoa. They probably filter that out, but I'm not sure about that. So they need access to your mailbox, at least. What they say is, again: send us a picture of your key, and they make a copy of it. Of course there are several places your mail can be. In a mailbox outside — maybe that's not locked, so then they're easily done. If there is a lock on your mailbox, they need a picture of that key. If it's behind a gate or fence — because that's what most Americans do — they need that key as well. You can have several layers. But they didn't want to go into your house, probably because it's America and they'd get sued, the liability or whatever. So they say: if your mail comes through a slit in your door, so you need to be behind your front door to actually grab the mail, they'll supply you with a mailbox you can put outside your door. But the idea remains the same: they need a picture of a key, they make a copy of it, and then they actually use those copies in their business proposition. I think that's cool — taking pictures of keys and making keys from them.

That's not new. That's not new at all. In 2008, the University of California was playing with this device — let's say a neat camera and a kick-ass lens. What you can do with big lenses is take neat pictures and zoom in a bit. That's a terrace; there's a table on the terrace; on the table there's a book; on the book there's a key; and that's the zoom-in. You can neatly zoom in with big lenses. And of course we can see that key, so let's focus on that one. The picture is not taken straight on, but that's not really a problem, because if you look at the head of this key — this one — it looks like a K. And if you know your locks, you know that's a Kwikset. The fun part about a Kwikset is that we know what the dimensions of that key are, especially the head, because the head is the same for all Kwiksets. Of course the code is different, but if you keep distorting and changing the picture — I mean, that's clearly not the key, but if you keep warping it until this has the correct shape — then you know this is the correct shape as well, which you can then decode to the actual code, feed to a code cutter, and run it. That works. So this happens from far away; you just need a big lens. And this was 2008.
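The decoding step at the end is the only non-photographic part, and it is simple enough to sketch. The depth chart below is invented for illustration — real manufacturers publish their own specs:

```ruby
# Snap measured cut depths (in mm, from the rectified photo) to the
# nearest factory depth code.
DEPTHS = { 1 => 7.8, 2 => 7.2, 3 => 6.6, 4 => 6.0, 5 => 5.4, 6 => 4.8 }

def decode(measured_mm)
  measured_mm.map do |m|
    DEPTHS.min_by { |_code, depth| (depth - m).abs }.first
  end
end

p decode([7.7, 6.1, 6.5, 4.9, 7.1])  # => [1, 4, 3, 6, 2]
```

Note how a slightly worn 7.7 mm cut still snaps to code 1 — the same wear-and-tear correction described earlier, for free.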
Lenses improved. CCTV improved. So this happens. Now, for example, this is a picture shot by Ray. Ray is a member of SSDeV, which is basically the Toool of Germany — it's not the same, but ish. This is 2009, at HAR in Vierhouten. Ray likes handcuffs. He's a lockpicker, but he's especially interested in handcuffs, so he knows his cuffs. And he had brought this key: a 3D-printed key for the German handcuffs, because the handcuff keys of the German police are all keyed alike — well, the bulk of them are; the normal police ones are keyed alike. If you think about that for a second, you might go: that's stupid. But if you think a bit longer, it actually makes sense: if you get arrested by cop number one, and he cuffs you, puts you in the car, drives you off and hands you over to another cop at the station, it's quite neat that the other cop can open your cuffs. Otherwise that's going to hurt. So yes, they're keyed alike. In most countries that's the case.

Ray was at this big event in the Netherlands — an event so big that we actually had police walking about. Dutch police. And Ray, being Ray, was like: that's a LIPS. He instantly recognized the make and brand of the handcuffs the Dutch policemen were wearing — and the German cuffs are LIPS also. So he wondered: will this key fit the Dutch cuffs as well? He went to a police officer and said: dear police officer, can I try this key in your cuffs? And the officer said "shut up" and turned his back to him — which clearly gave a visual on the actual key he was wearing. Now, I'm not saying I can make a key from that picture, because it's a bit blurry — it's a bit less blurry here, but you get the idea. But I can see that this is a low, a high, a bit low, and a high. Yes? So if I compare that to the original key that we already have — I'm not saying it is the same key, but you can kind of guesstimate that it is, right? I mean, that is information leakage. Just wiggling about with your key gives away quite a lot of intel. A bit later he tried with another cop, who said: yes, sure, go ahead — and yes, it works. So this is actually the key to all Dutch and German police handcuffs. The STL you can download there, and you can print your own — because, well, information wants to be free, right?

Different example. This is what I call a typical clean-desk environment. It's a mess, but of course, me being a lock geek, I see a key. We put a coin there just for scale, because if we know the scale, we can do stuff with it. We guessed that this is a cabinet key, because there's a cabinet underneath that desk that we kind of want to have opened, and, well, it looks like a cheap key. Because it's cheap, we can assume it has generous tolerances, so we don't have to be exact. So here's what we did. If you print out the picture you just took — it was taken flat on, so you don't have a distorted image — you just need to size it so that the coin comes out at the correct size. If you then print it on paper, put that paper on a sheet of metal and just cut it out — go all kindergarten on it and stay within the lines — with some luck you'll have a piece of metal with roughly the same shape as your key. Put that in, and with some luck it'll open. And it did open the cabinet. We could have just nicked the key, but this is way more fun.
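The coin trick is just a scale calculation — the example numbers here are assumed:

```ruby
coin_mm   = 23.25        # real diameter of a 1-euro coin
coin_px   = 310.0        # its diameter as measured in the photo
mm_per_px = coin_mm / coin_px

key_length_px = 680
puts key_length_px * mm_per_px  # => ~51 mm: print at that size and cut
```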
So this works on absolutely crappy locks. And, talking about crappy locks: this is a TSA-approved luggage lock. Actually, these are two locks. Well, this one says Samsonite — it doesn't matter, I'm not bashing that brand, I'm just talking about the idea. This lock, the number lock, that's yours. This one, the lock that actually takes a key — that's not yours. That's the TSA's, because you do not have a key for it. This is a built-in backdoor for that lock. What happened after 9/11 — well, especially after 9/11 — is that the TSA, if you fly to or from the US, wants to look in your baggage. That's fair, I guess. But if you lock your bags, they need to get through those locks, so they just started cutting the locks to get in. People found that annoying. So some companies, like Travel Sentry — this is their logo — started making and selling locks that had built-in backdoors. That was the feature. This is the backdoor. It's quite certainly not hidden; it's just plainly a backdoor. The TSA has a couple of keys, 9 to be exact. This is the number 2, clearly stated on there. So the TSA official just takes out his number 2 key and opens it. That's it. And there is a paper trail, so they can go through your locks and your baggage without destroying the locks.

Okay. So there are 9 keys that can open all the baggage. And when there are media reports with these kinds of pictures, I cringe. It's like: come on, guys. Well, I can't really see which number is which, but that's about it. And then again, having an internal TSA document leaked on their external website — because, well, rules and databases and God knows what — this is dead-on, quite easy. And of course, if you own such a lock, you know what this distance is, so you know what the scale of that key is. And because of the internet and information, of course, there are 3D printers. And these are the keys to all the baggage. They're printable again, downloadable there; you can print your own. That's what we do: share information. Now, these are not high-security locks. They're not — well, maybe except the number 6; that one is made by Abus, and it's actually quite hard to pick. The other ones: just stick anything in there and it goes. So, like I said, it's not a high-security lock, but it tries to give the user some sense of security, which I think is false. It's a seal at best — because now I can unseal it and reseal it. So I refuse to travel with these locks. I just use zip ties — weird zip ties that are probably not in the possession of other people at that moment. So I use a zip tie as a security seal, not as a deterrent. Just a seal. And it works. I put some spares in the box, and most of the time the TSA just reapplies one of the spares. And I know how many spares I had. So that works. That's my security posture.

Another example. In April 2010, some media attention went to this idea: apparently you were able to buy a key — a master key, so not just a key, but the key — to all the New York subway stations, for only $27. A master key means you can open all the locks. So not just travel for free: you could shut the system down, because you have access to technical cabinets and God knows what — exit doors, fire exits, all that kind of stuff, you can open it. And of course, this being post-9/11, and this definitely being New York…
The idea was: terrorists! If terrorists get hold of this key, this key, this key... So by stating that the evil guys shouldn't have this key, by that very act, they're actually giving it to them. And just in case the picture is not high-res enough to actually read the key, they put it on a map with a known scale. So this is giving it away, because any locksmith should be able to cut a functional key from this.

Okay, mind you, this was 2010. I give a version of this lecture every now and then. And last month — no, two months ago — I was in Los Angeles doing this, and one of the questions was: did they ever change these locks? And I was like: I don't know. I don't know, because, one, I'm not in New York, so I can't test it. I don't have the key. Well, I do. But then again, rule number one: you only pick locks that you own, or that you have explicit permission from the owner to pick. Well, that's probably out of the question here. So I don't know; I have no way of telling you. Until last month, when this media article appeared: the key is 8 dollars all of a sudden. And of course, they had to print that freaking key full size, otherwise the message doesn't come across. So people just found that out again. And actually, if you go to the online version of this article and tinker a bit with the URL, you get this picture, which is 300 dpi — high-res, definitely enough to actually do it. I mean, come on. You don't need this. It's not needed.

Okay, more fail. If you want to skim cards, you can go all fancy and make plastic add-ons that look like the real deal, to hide all your electronics. That's quite cool, but there are better ways to get in. Let's watch a tiny video.

"Let's bring in NBC Bay Area's investigative reporter Vicky Nguyen. Vicky, it seems unbelievable that one golden key, so to speak, unlocks so many pumps." — "We were surprised too, Raju. You could call it the key to the kingdom. It is a remnant from back in the day when all gas station pumps were made with the same lock, to make it easier for inspections and maintenance. Now those keys are just providing easy access to a very lucrative crime using new high-tech skimming technology, and in the end, we are all on the hook. Hidden behind here: a new Bluetooth-enabled skimmer that can rip off your credit or debit information in seconds. This universal gas pump key is making it even easier for thieves to install these new skimmers."

That's not a high-security lock. The bitting is quite obvious, and I also now know exactly which blank to order, because that's a Y11 USA, which costs about 50 cents — I don't know, if you buy them in bulk. And of course, once you have access to the box that the machinery is in, you have a box, and you can put all your key-loggers and crapware inside it. And it'll look like the real deal, because guess what? It is the real deal. It's the original box. Nobody's going to see that it's a fake, because it's not. The fake is inside. So, having one key for all those gas pumps — well, it does make sense, because if you're the engineer who has to maintain all this stuff, you don't want a crate of keys, you want one. So it's always a trade-off. And I think this is the wrong trade-off, because when it goes wrong, it goes wrong in a bad way.

All right. Suppose you live in New York. New York has a lot of buildings which are high, and suppose you're thinking about fire in New York.
When there's a fire in one of those high buildings, you definitely want the fire people to get into the building. One way of doing that is: don't lock your door. Not the best option. Or: give them the key. But then again, New York is a big place, so if they collected the keys to all those buildings, they would need a separate truck to... So that's annoying. And that goes for every truck, because you never know which truck will be first on the scene, so you'd have to duplicate everything. So what they do instead: they have tiny, tiny boxes — tiny vaults — that they mount outside the building, and the key to the building goes into that tiny vault. And those vaults are keyed alike. And if you do it in a wise way, you put some extra alarms and monitoring on that box, so that at least all hell breaks loose when it is broken open. So there are a couple of keys, just a handful, that are basically the keys to the city — literally.

Of course, these keys get cut by locksmiths, and some locksmiths go out of business, because of old age or whatever. This is one: a locksmith who used to cut those keys. He went out of business, and he had some leftover stock.

"Well, the apocalypse might be coming, because a New York Post reporter just scooped a legitimate story. An intrepid reporter for America's third dumbest paper has found a slightly disturbing item for sale on eBay. Former locksmith Dan Ferraris sold the undercover reporter a New York City fireman's key ring for 150 dollars. The keys give the owner the ability to control elevators, circuit breakers, subways and traffic lights all over the city. They essentially become the Keymaker from The Matrix Reloaded."

Media being media: every now and then you see an original story, but most of it is a copy of a copy of a copy — basically like keys. So other media outlets start copying the same story. The Post actually had dead-on pictures, white background. Useful. And other outlets copied that again, and added information. This one actually — these are the keys to the electrical panel. The fire elevator key also works in — where is it? Hold on a second — well, high-rises, important stuff. Scary stuff. That's the fireman's service key. That's the subway key. We were talking about traffic lights: turn all the lights red, turn all the lights green. Remember Hackers, the movie? Brilliant. Fire alarm boxes: sound the alarm — or don't sound the alarm when there's an actual fire. So, yeah, you can create chaos with this key set. And apparently the only way to get the story across is by giving those keys away for free.

Okay. This is the logo of a hackerspace in Amsterdam called TechInc. And, well, I'm a member of Hack42, which is a different hackerspace, also in the Netherlands. And it's not a rivalry between hackerspaces; it's a bit more like siblings. And what do siblings do? They quarrel, right? All's fair in love and war, and stuff like that. TechInc got a new space at some point. And of course, with a new space comes a new lock, and come new keys. They were very proud of their new space — which is awesome, absolutely. And so one of our members, who is a regular troll, started calling one of these guys, like: are you sure that's an okay lock you have on your new facility? Because we have a member on our team who knows his locks. So if you just send a picture, we can assess whether it's a decent key or not.
And this showed up in the mailbox. Well, it's not a very clear picture, but if I do this... yeah... I basically end up with that, right? I mean, if it was a high-powered laser, it actually would still be there. Well, this is a vague outline of what the key should look like — ish. So we hooked that up to the newest machine we happened to have, which happened to be a laser cutter. Good thing it wasn't a coffee maker, because that would have been a mess. And what you see happening is: it just burns that outline out of a piece of plastic. That piece of plastic is, I don't know, a whole millimeter thick, so it will never fit — or at least it should never fit — in your keyhole. So, well, actually, that was the label; now it's the actual burning. We do have some ventilation now. We had just gotten that machine, so we did not want to inhale those fumes. So you end up with a basic outline of the key, which does not fit your keyhole, but it is a basic outline (there's a small sketch of extracting such an outline below). So what can you do with that... Well, let's wait for the actual burning to stop. Burn, burn, burn, burn. Focus, focus, focus. There we are. See? Basic outline. Picture. There you go. You see, we set it a bit too hard — it burned a shadow — but this is the actual shape. The blank, we kind of assessed just by looking at it. So: hook this piece of plastic into a normal key duplicator. Here's our bit of see-through plastic, and there's a normal blank. And you can just do it the way you're used to. And — did it work or not? As I was not a member at that time, I couldn't assess that, because rule number one: not my lock, no permission. But being a hackerspace, where all information wants to be free, we put this whole video and all the specs on our public wiki, of course. Well, that's what you do, right?

Well, some people freaked out. It's all for play, so the day after, we showed up with a decent lock — and 80 copies of the keys, so they could redistribute those. And then they said: so now we have a good lock, you can't get in the door anymore. I said: and who supplied you that lock? I became a member, and they even gave me a discount, because, well, they didn't have to give me a key anymore, did they? So that was good. Cool.

Speed cams. Fun, right? Let's do the video first. [Video, translated:] "Keys, and a cylinder lock. According to the letter writer, this is what's in the speed camera, and with these keys I can open it." — "And how much did it cost?" — "It cost 14 euros and 23 cents." — "If this works, for 14 euros and 23 cents, that's also a lot cheaper than the speed camera itself. This is the one. This one. There have been a lot of actions in Flanders against these speed cameras. We've shot at them, we've burned them, we've even covered them up. But actually, according to our letter writer, none of this is needed. Because the secret of this speed camera is in this box. This robust box holds the remote control, the electronic brain of the camera. This key — you turn this combination with it, and then you open it. The key fits. The door opens. Here you can just switch it on and off. Let's close this. Finally. Completely new."

This is very cool. This is cool. They did not change that combo. That's leaving the default password as the default password — exactly the same sin. That's weird. But I didn't want to buy those locks with those keys. I just wanted to assess whether I could make a working key from the stills we got from that video. We got this one: I can get some info, but not all that I need. We got another one: that's doable. I think this is the money shot. It's pretty much straight on.
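About that outline extraction: pulling a key silhouette out of a high-contrast photo is a few lines of OpenCV. A minimal sketch, assuming opencv-python is installed and the key is darker than the background; the filename and threshold value are placeholders.

# Pull a key silhouette out of a photo, ready to export as a cutting path.
import cv2

img = cv2.imread("key_photo.jpg", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)  # key darker than paper
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)   # biggest blob = the key
cv2.drawContours(img, [outline], -1, 0, 2)
cv2.imwrite("key_outline.png", img)            # trace this at 1:1 scale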
It kind of gives nice shadows, so we can see the depth. If you know your locks, this is a quite peculiarly shaped head. If you know your locks, you see that's a BKS — that's a BKS head, because that's the only one I know that has this head. And BKS has standard dimensions, so we can tell what the rest of the key should look like. This is basically the style sheet for a BKS: the head goes here, these are where the cut positions are, these are the possible depths. If you keep looking at that picture and compare it to this, you can kind of guesstimate what the actual depths should be (there's a small sketch of that below). We came up with this. We guessed and assessed, basically — it's a guesstimate. We think this is the key to all the Belgian speed cams. Of course, rule number one: can't pick it, because I don't own it and I don't have explicit permission from the owner to try. Well, asking is probably out of the question, right? So what to do? So we bought a speed cam, right? And yeah — it works. This is actually the key to all the Belgian speed cams. So maybe we shouldn't show it.

So, we're talking about pictures of keys, and how that's a bad thing. But actually getting hold of the key itself, even for a split second — that's the golden ticket, right? Then you can do stuff. Of course, you can run to the shop on the corner and have it cut. But there are ways to protect against that. One way: if you look at your keyway, there's a certain swerve to it, and if you get all intricate with that — make weird swerves — and you patent those profiles, other people are not allowed to sell keys that look the same. And if you use a very weird profile and only use it for your installation, it's going to be harder to get fitting pieces of metal that go into the lock. Normally. If people play nice.

But then there are boxes like this. This is a key cutter, but it works in the other direction, basically. This is an Easy Entrie machine. It costs a couple of thousand — dollars, euros, whatever — which is some money, but it depends how determined your attacker is. For a sport lock picker, that's an investment — which we made, because we wanted that box. It's a cool machine. It takes slugs of brass that look like this; a smiley face is always good. What it does is cut keys. But a normal key cutter would go in that direction and duplicate the cuts. This one takes the grooves — the profile — and duplicates those. Here's a zoom. You see, it just mills out these grooves. So you end up with a blank key, which you then put on a normal key-copying machine that does the cuts on the sides. And then you end up with a working key. It's fun. It's a very expensive way of duplicating keys, but if you have a key that's hard to duplicate or order through normal channels, because you can't get the blanks — this is a way to go. It's a fun machine. Kind of cool.

Tin soldiers. How are tin soldiers made? Anyone? That's the material — how are they made? They're cast. And the metal is melted, of course, because that's part of the casting. They're cast. So if you can make a cast of a key — which, hold on, I have something out here... still there? Yep — if you can make a cast of your key, you can pour metal into it, right, if you have the right stuff. And you can go quite fancy with that. So we tried a very cheap set, comparatively. This is just regular clay. We went to the hobby shop and bought basically all the clay, to figure out which is the correct clay.
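Going back to the BKS guesstimate for a second: snapping depths measured off a photo to the manufacturer's standard cuts is a one-liner. The depth chart below is a placeholder, not real BKS data.

# Snap depths measured off a photo to the nearest standard cut of the system.
STANDARD_DEPTHS_MM = [3.2, 3.7, 4.2, 4.7, 5.2]   # placeholder depth chart
measured = [4.0, 3.3, 5.1, 4.6]                  # what the shadows suggest

bitting = [min(STANDARD_DEPTHS_MM, key=lambda d: abs(d - m)) for m in measured]
print(bitting)   # the guesstimated cut code to mill into a blank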
Finding the right clay takes some research, because if it's too solid, it won't take an imprint, and if it's too soft, it smudges a bit. But, well, if you're buying clay, that's not an investment. Come on. And, well, there's a piece of metal — we haven't actually figured out what it is, because it has a very, very low melting point. It's not lead, because that would be unhealthy. But it's, I don't know, whatever. And we just tried it with a normal, quite good key — not a high-security one, but a quite normal, average key — at a friend's place, on his kitchen table. We just tried it there, and it worked the first time. Like a charm. And it's brilliant — it even works with that.

And if you go a bit more high-security, then that clay probably won't do. You need a material that basically has a higher resolution than normal clay. What we found out: your dentist uses a two-component putty to take impressions of your teeth. That's awesome stuff. It doesn't melt or shrink when it's heated or cooled down — which is good, because otherwise your key would come out a different size. So that's good stuff. It's quite expensive, though. So befriending a dentist is always a good thing. You end up with stuff like this. That's the original key. And you can imagine — I mean, this is not a high-security key, but it's a different kind of key. Actually, the talk after mine will be about keys like this. On a normal key duplication machine, the kind that cuts like this, this is going to be impossible to do. It's just a different type of machine. I'm not saying it's a better lock, but it's different. So a normal key cutter won't be able to do this. But of course, you can cast it. And if you take the copy out of the mold carefully, without destroying the mold, you can retry — the mold is still intact. You can just melt this one down and redo it. And then you're golden. You can keep doing it over and over. This actually works. We tried it.

Okay, another one. This is the ID of Rob Gonggrijp. You probably have no idea who the fuck Rob Gonggrijp is. But if you were Dutch, you would know. Rob is the best-known hacker of the Netherlands. He's basically one of the driving forces behind the fact that we're not using computers to vote for parliament these days — that's largely thanks to Rob. And he founded our first commercial — well, public — ISP. So he's one of the good guys. And he's a hacker. So he knows his operational security, you would think. This guy knows risks, knows risk profiles. He should know better than to show keys in public. Let's see how that goes.

[Video, translated, roughly:] "...and he wants to show us that there are all sorts of interesting things under this station. Shafts, bunkers — extremely exciting, especially at that time. Really great. I'm going in. We're standing in the lift. I'm not going to push the button — I can turn this key switch. ... We're lucky the light is on. We just keep going, deeper and deeper..."

And then you have hackers in your super secret nuclear facility. What's wrong with this picture? It's a bit out of focus. It's not straight on. And there's no chance in hell that this is the actual key. Because this is a double-bitted key. This is a cabinet lock key. This is just a key he happened to have in his pocket, and he showed it to the camera. You didn't know that, did you?
It took nothing away from the story to lie. So this guy has excellent operational security. The journalist is happy, because he has his footage. The audience is happy, because the tension is still there. You didn't know it was a fake. It's just a prop. Security is not compromised. The story is still there. He's not showing keys. Not showing keys is a good idea.

Another way of not showing keys is this. This is a key pass — no, a Keyport is the term. It's basically a box that you slide your keys in and out of. This one is branded DEF CON 22, because it's a fail — and that's what I like. I like fails. What it is: it is a box. It's a neat idea. You can put USB things in it, but it's about the keys that you slide in there. So it's basically an intricate way of keeping your keys in your pocket; you only slide them out when you use them, so nobody can see them if they're looking. And what they do: they don't cut your keys, but they send you the blanks with this adapter, basically. So they need to figure out what your actual keys are. So of course, what do they need? They need a picture of your key. So you're selling this to DEF CON 22 attendees, who are paranoid — rightfully so — you're selling to the über-paranoid, who know not to show their keys, and you're asking them to send in pictures of their keys. That's wrong. Well, granted, in the small print it says that for extra added security, you should maybe blur or blank out the actual teeth of your key. But that should not be "added" security. Come on, guys, you're selling a security product. That should be the default. That should be the only way you can upload pictures of your keys. Come on. Get out of here.

So, we did figure out that this is probably not the best idea in the world, right? I mean, Post-its should not be on consoles. There should not be passwords on Post-its. That's not going to happen. So this is wrong, right? Well, this is a website called pleasebreak.in. What it does is basically grab a Twitter feed, and every time somebody tweets a picture and says "new house" and "keys" in that tweet, it gets added to this site. And of course these are all click-through, so if you click on one of these pictures, you get the key and the house — and that picture is probably geotagged, right? And if it's not, I'm pretty sure someone can write a Google Street View add-on that grabs pictures and compares against them. We'd only have to look at Turlock, and I have no idea how big that city is, but this is doable. And you have access to this guy's Twitter feed, so you probably know when he's on holiday. And he just got a new house, so it's probably filled to the brim with new, shiny toys. Come on, this is giving it away. Don't do this. This is wrong. (There's a small sketch of the matching logic below.)

So, like I said, showing pictures of keys is wrong. But then, preventing pictures being taken of your keys while you're using them — that gets harder and harder. If you happen to live in London, you're surrounded by a gazillion CCTV cams that basically watch you all the time, right? And of course there was Google Glass — well, we don't see it that often now, but something like that will evolve; people are already working on contact lenses. So we won't have a clue whether somebody's actually looking at us or not, or taping or recording. And we all have pretty capable cameras in our pockets right now. Well, those tracking devices we call telephones — they're cameras as well, right? And they're high-res enough to do this stuff.
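The core of such an aggregator is trivially small. A minimal sketch of just the matching logic — the tweet dicts here are invented stand-ins for whatever a social media API actually returns; fetching them is left out.

# Filter already-fetched tweets for the telltale phrase plus an attached photo.
tweets = [
    {"text": "New house, new keys!", "photo": "https://example/pic.jpg", "geo": (37.5, -120.8)},
    {"text": "lunch was great", "photo": None, "geo": None},
]

hits = [t for t in tweets
        if "new house" in t["text"].lower()
        and "keys" in t["text"].lower()
        and t["photo"]]

for t in hits:
    print(t["photo"], t["geo"])   # a key photo, quite possibly geotagged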
So, when I was younger, when we basically stopped using cash money, there was a TV series that showed us: if you're punching in the PIN code for your plastic money, shield your numbers. A PIN code. Well, a key is basically the code to a lock. So: don't show it. You saw Rob doing it in that video — when he was actually operating that elevator, you didn't see the key in the clear. Just shield it a bit.

Okay, I've got one more fail and then we're done, so that I don't keep you from the drinks too long. This is a bad guy. This is actually a killer. I know that, because he got convicted for killing people. He went to jail for killing people. And he walked out of jail — because the design of the master key was printed on the front of the prisoner's information handbook. A copy of said book was given to all inmates upon arrival at that facility. So — I don't know. Security can be hard, and security can be easy, but for some weird reason it's always quite easy to fuck it up.

So, that's me. Any questions? That's me. I'm done.
A password shouldn’t be on a post-it note. In plain view. On the console. The password to a locked door is called a key. So if a reporter wants to get the point across that certain people shouldn't have access to a particular key, would it be wise for said reporter to show that key to the world? This talk shows how not to run this story, why we should care, and it may make you rethink your physical security a bit.
10.5446/18838 (DOI)
Now, I could ask you how the party was last night, but I can see the casualties. So I could be gentle with you and make this presentation light and entertaining. But I'm not a nice person, so I won't make it easy. Let's just jump right into this.

What am I going to do? Who are the players in the landscape we are moving in? I mean malware, malware-writing groups, anti-malware protection programs, special APT protection programs. Every major constituent is thoroughly tested and their qualities are measured. The anti-virus protection products are regularly tested by third-party testers. Even these special APT protection devices, which otherwise claim themselves to be untestable, are — and can be — tested; you could hear about it in yesterday's presentation by Zoltan Balazs, or later today from Boldi, in a bit more detail, on how to test these APT defenses. Even the tests themselves are measured against objective criteria, by the Anti-Malware Testing Standards Organization. There is one single player who is never tested, and those are the malware authors. And that's not fair. We should be aware of their capabilities. Not only for fun, but also because there is an actual war going on between them and us, and the first rule of war is that you have to know your enemy. If you don't know your enemy, your defenses will be inadequate. If you underestimate them, they are going to get you. If you overestimate their capabilities, your protection efforts will be misplaced: you will waste effort in areas where you shouldn't, when you should concentrate it elsewhere.

Just as an example: suppose you have a house full of valuable stuff and gadgets, you want to protect it, and you're afraid of the burglars in the neighborhood. There are a couple of options. You can build a wall around your house — a three-meter-high wall, electric fence on top. That would effectively stop the ninjas, who, as we all know, have only a 2.5-meter vertical leap. But that's a bit expensive. It also blocks your view of the outside, and it has devastating effects on the vegetation around the wall. So if you happened to know that the typical burglars in your neighborhood are just cat burglars — who are really good at lock picking, because yesterday they attended the lock-picking workshop here at Hacktivity — then you would know that the wall is useless. It's not necessary. You should strengthen your locks and windows and doors. That would be an adequate measure for you, much cheaper, and you would still be protected. Now, it still wouldn't defend you against nation-state-sponsored attackers like the NSA and the likes, but chances are that before the NSA attacks you, there will be about five Russian cybercrime groups, three Chinese APT groups, and maybe one Israeli or French APT group attacking you. So you have to prepare for the vast majority of the attacks. For that, you have to know the capabilities of the attackers — and that's the point of evaluating all these malware authors, the APT groups and the common cybercrime groups, and that's the point of my presentation.

Now, how do you evaluate these groups? How do you measure their skill set? There are a couple of problems with testing them. First of all, the subjects — these malware author groups — work on different principles. They have different purposes.
Some of them want your banking access information, so they can steal your money from the bank; others want sensitive documents from your hard drives; while yet others want to physically destroy your nuclear facilities. So they have different purposes. They also have different targets: some are targeting home users, others large corporations, yet other APT groups are targeting non-governmental organizations. So the target range is wide as well. And because of that, they have to defeat different defenses: home users probably only have a free anti-virus solution; large corporate users have all sorts of defense-in-depth measures in place, even some advanced protection devices. And for that, the attackers use very different approaches and tools. Some are perfectly happy just sending a phishing email with the text "hey, here is some nice content, click here, and you'll be fine". Others use common exploits, yet others use zero-day exploits. So there is a wide range of tools in use.

So the task is: how do you measure and qualify players who work across a very wide range of activities? And the solution is something like what professors do in university classes. They have a lot of students with a wide range of capabilities. They give them a problem to solve, and based on the level of their understanding of the problem, and the skill they show in the solution, they rate the students. That's what I'm going to do. I'm impersonating a teacher; the APT groups are going to be the students.

Now, for that test to work, the problem has to be solvable — if it is not solvable, there is no point in the test. It also has to be difficult enough: if everyone scores perfect on the test, it is not a good comparative test. Also, the test problem has to be granular enough to differentiate between a wide range of skill sets. And lastly, every student in the classroom has to be motivated to solve the problem; if it is a problem that only 10 percent of them are interested in solving, the test results will not be usable for our purpose of measuring a large number of these malware author groups.

So what is going to be the test problem? The test problem is a Word vulnerability, discovered last year. It's a Rich Text Format file-format vulnerability that leads to memory corruption. Now, if you attended yesterday's presentation, a lot of the terms and methods I'm going to talk about should be familiar to you — that was a very extensive and good overview of the general principles, and this is going to be a practical implementation. And this is what I referred to in the introduction, when I said I was going to be tough with you: in order to understand the results of the test, you have to understand the methodology of the test, and the methodology relies on you understanding how this exploitation works. Now, this vulnerability has the unsexy name of CVE-2014-1761. I'm going to refer to it as 1761.
I estimated that saves me 35 seconds of presentation time overall. Anyhow, this was a new Word vulnerability and exploit, and every possible malware author group is very happy to get their hands on a new Word vulnerability and exploit, so they were very much motivated to use it — because it is a powerful tool for infecting users.

If you read the original Microsoft description of this vulnerability, it says that it affects all the Word versions that were out there at the time — all the versions listed here are vulnerable and possibly exploitable. Now, we all know that in theory there is no difference between theory and practice; however, in practice, there is a huge difference. So, if you had to guess: how many of these Word versions were actually affected in practice? The silence I take as "zero". It was slightly more than that. Actually, exactly one version was ever affected, and that was Office 2010 Service Pack 2, 32-bit. And the reason is that even though all the other versions were exploitable, and could have been exploited successfully, the practical implementation of the exploit relied on absolute memory offsets taken from a particular Windows component — MSCOMCTL.OCX, a particular version of it — and that one was only installed by default with Office 2010 Service Pack 2. It would have been a straightforward process to port this vulnerability to all the other Office versions. It didn't happen. Why — you will probably understand around the middle of this presentation.

So let's talk a little bit about the exploitation process itself. As a very rough overview: there is a Rich Text Format exploited document, the vulnerability triggers, a shellcode gets executed, and at the end a payload — some sort of trojan — is dropped onto the system. Now, there is one slight problem in this chain, and that slight problem is called Data Execution Prevention. It is relatively easy to fool Word into writing shellcode into a memory area on the heap. It is also quite easy to convince it to jump into that memory area. However, what is difficult — in fact not possible with DEP present — is to actually execute that code, because these data areas, whether on the stack or on the heap, are declared on contemporary Windows operating systems as non-executable. You can inject your code there; you cannot execute it. So before you can run the shellcode that would drop and execute the final trojan, you have to make sure the shellcode sits in an executable page. So the whole exploitation starts with a bootloader component: that bootloader allocates a new memory block, makes it executable, copies the shellcode there, and executes from there.
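To make that concrete: conceptually, the bootloader's whole job is the three steps below, expressed as ordinary API calls. This is a minimal Python/ctypes sketch, Windows-only, and obviously not the exploit itself — the stand-in "shellcode" is a single harmless RET instruction. The point is just that under DEP, an attacker must first obtain an executable page.

# What the exploit's "bootloader" must achieve under DEP (Windows only).
import ctypes

MEM_COMMIT_RESERVE = 0x3000
PAGE_EXECUTE_READWRITE = 0x40

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p

shellcode = b"\xc3"  # stand-in payload: a single RET instruction

# 1) allocate a fresh memory block and mark it executable
buf = kernel32.VirtualAlloc(None, len(shellcode), MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE)
# 2) copy the shellcode into it
ctypes.memmove(buf, shellcode, len(shellcode))
# 3) jump there
ctypes.CFUNCTYPE(None)(buf)()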
That's pretty easy. But there is one slight problem: in order to make this allocation, you have to execute the bootloader code — and how do you execute code if you cannot execute code, because of Data Execution Prevention? Here comes the concept of ROP exploitation: return-oriented programming. It means that you cannot execute the code you placed in memory, but you can execute code that is already placed in memory by the system libraries at the time the exploitation occurs. There are about a dozen or so Windows system libraries already loaded into memory for your convenience, which means there are tens of megabytes of code lying around that you can use. All the attacker has to do is pick small snippets of this code — think of them as puzzle pieces — take these puzzles from the Windows system libraries, and chain them together so that they accomplish the functionality that is needed. During the exploitation, when the memory corruption occurs, you divert the normal execution of Word to jump to the first puzzle piece of your chain, and Word gets so disoriented that it jumps from puzzle to puzzle; if they are chained together very carefully, they accomplish the task you want. These puzzle pieces have very limited capabilities, so you need a lot of them to accomplish even the smallest task.

There is another problem: ASLR, address space layout randomization. If you want to use these puzzle pieces, you have to know where they are in memory. 99 percent of the Windows libraries are placed randomly in memory; there are only a few of them which have a fixed load offset, and those libraries are of extreme value to exploiters and attackers. MSCOMCTL.OCX, the one used by this vulnerability, is one of those libraries. (There's a small sketch of how such a chain is assembled below.)

So the exploitation starts with confusing Word — making it take a detour in execution, diverting it — and then the ROP chain starts, which allocates executable memory for the shellcode and executes it. There is another slight problem in this particular case: when Word first gets diverted, it gets diverted to a small memory region, and that small region cannot host an entire ROP chain. There is another memory region that the attackers control, which can host a large chunk of data, but it is not where Word first gets diverted to. So the bootloader of this exploitation process is further divided into two parts — there is a bootloader of the bootloader. The initial ROP chain makes sure that execution gets diverted to that larger buffer, which is already hosting the entire main ROP chain. That chain then allocates an executable memory range and copies the shellcode there; the shellcode executes in one or two stages, locates the payload, decrypts it, drops it, executes it — and there you are, voilà, with the installed trojan.

Now, this makes it a good test problem for the malware authors, because it is granular. Modifying the final payload is an everyday task for them. Modifying the shellcode — not every day, but they do it on a regular basis. Touching the ROP chains — that's a highly skilled operation, and not many of them dare to do that.
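A minimal sketch of why a fixed-base module defeats ASLR here: a ROP chain is, at bottom, just a packed list of addresses, each being the fixed module base plus the known offset of a tiny code snippet ending in RET. The base and offsets below are made up for illustration.

# A ROP chain is just consecutive little-endian addresses placed on the stack.
import struct

MODULE_BASE = 0x275A0000                   # fixed load address (no ASLR) -- illustrative
gadget_offsets = [0x1234, 0x5678, 0x9ABC]  # made-up offsets of snippets ending in RET

chain = b"".join(struct.pack("<I", MODULE_BASE + off) for off in gadget_offsets)
# Each gadget's final RET pops the next address off the stack -- that is how
# the "puzzle pieces" chain together without injecting one executable byte.
print(chain.hex())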
So here we have a granular problem for the malware authors to solve. If we look a bit more closely at the exploit itself: it is a memory corruption vulnerability, where a vtable — a pointer to a function table — gets overwritten during the exploitation. RTF documents can contain list override tables, which hold various parameters for lists embedded in the text of the document. The data from these list override tables is stored in structures in memory, and the addresses of these structures are stored in a preallocated memory region. Now, if there happen to be somewhat more of these override structures than Word expects, they stretch over the boundary of the preallocated memory area and overwrite whatever comes after it. And that's the memory corruption — at least up to the point of hijacking the execution. At a certain memory address there is a pointer to a vtable in MSO.DLL — a pretty unremarkable function table, no fancy functions. In the process of parsing the malformed RTF document, the addresses of the list override tables stretch over the allocated region and overwrite this function table pointer. So at some unrelated point later in Word's execution, a call is made through this function table — but instead of taking the appropriate function from MSO.DLL, the call takes its address from one of the list override tables. And this address is an absolute memory location inside MSCOMCTL.OCX.

The initial ROP chain, as I said, does nothing but transfer execution to a larger buffer controlled by the attacker. Inside the list override structure there is a leveltext buffer, which can hold a large chunk of binary data; that chunk is going to be the main ROP chain and the first-stage shellcode. The list override table contains the address of this buffer, so the initial ROP chain really has nothing to do but execute this single call into the leveltext buffer. Like I said, the ROP gadgets — the puzzle pieces you can use from the preloaded system libraries — have very limited capabilities, so for this single call you need six different puzzle pieces.

And just as an illustration of the complexity of the task: the address of the first ROP gadget is stored in a list override table, but within the RTF file it is actually combined from four different places. For example, the first byte, E8, is the value of the \levelnfcn tag in the RTF — 232 decimal equals E8 in hexadecimal. The next byte, 48, is actually a bit field, and several of the tags within the RTF combine into it — for example \leveljcn0, \levelnorestart and \levelold: \levelnorestart sets 0x40, the other sets 8 in this bit field.
That's how it combines. Finally, the last two bytes are in the \levelnumbers tag within the RTF file: this backslash-apostrophe 5a — note the hexadecimal value 5A — and the apostrophe character that follows it, which is ASCII 0x27. So in order to control one single address in the ROP chain, you have to modify the RTF file in at least four distinct places. You can imagine: using this exploit requires an intimate knowledge of the RTF structure and its representation. (The little sketch below shows how the four pieces combine.) In any case, at this address in MSCOMCTL.OCX you will find a small code fragment — this is the first puzzle piece in solving the code transfer.

After that, execution goes on to the larger leveltext buffer, to the real ROP chain stored there — and that one is a bit longer, since it does the memory allocation. Now, the attackers have no absolute control over the code within the ROP gadgets, so the gadgets do what the attackers want, and a bit more than that. In some cases, apart from doing whatever the attackers need, they perform, say, a pop from the stack that is not needed for the actual execution. But because there is a pop, and it's not avoidable, there has to be something on the stack that gets popped into a register that is then never used. So within the ROP chain there are a few unused bytes which have no significance for the exploitation; they just have to be there, so that something can be meaninglessly popped into a register. These bytes are not used — they could be anything. The main ROP chain's logic is very simple: it allocates new memory, copies the shellcode there, and jumps there. But because the gadgets are of limited capability, it requires about 10 to 12 building blocks to accomplish this task.

If we look at this from the RTF perspective: the RTF file starts with some sort of header, followed by some irrelevant information. The exploit trigger and the initial ROP chain are scattered throughout the text of the RTF file. The main ROP chain is stored in the leveltext buffer, along with the first-stage shellcode. The second-stage shellcode and the payload are usually appended as a binary chunk at the end of the RTF file. From the test point of view, this adds additional granularity: every decent malware author can modify the appended binary shellcode and the payload; that's not a problem. The first-stage shellcode is relatively easily recognizable in the leveltext buffer.
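As a sketch of those four scattered places combining into one little-endian gadget address — tag semantics heavily simplified, control-word names as given in the talk:

# Four scattered RTF fields combine into one little-endian gadget address.
levelnfcn = 232                   # \levelnfcn232            -> 0xE8, byte 0
bitfield = 0x40 | 0x08            # \levelnorestart + friend -> 0x48, byte 1
levelnumbers = [0x5A, 0x27]       # \'5a plus the apostrophe -> bytes 2 and 3

addr = int.from_bytes(bytes([levelnfcn, bitfield] + levelnumbers), "little")
print(hex(addr))                  # 0x275a48e8 -- an address inside MSCOMCTL.OCX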
That shellcode looks like a buffer of bytes that a decent malware writer can comprehend and modify. The initial ROP chain, though — for that, you have to really, really deeply understand the RTF structure. So apart from the granularity in the exploitation itself, there is granularity in understanding the RTF structure. It is a well-defined and granular test task for the attackers, and we are going to use it as such.

We are going to rank the attackers by the skills they show us. Starting from zero: zero knowledge means they buy a generator on the underground market and generate a sample with it. A basic skill set involves replacing the payload in an existing sample. Intermediate-level attackers can already modify the shellcode. Skilled ones make some trivial modification in the ROP chain itself. Advanced practitioners can make significant modifications in the ROP chain or the exploit trigger. And the really good ones can control every single aspect of the exploitation. That is the scale on which I will place the APT authors. (There's a toy version of this rubric sketched below.)

The first version of this research was published in February on our blog. I'm not going to touch on all of the families and groups mentioned there, because that would be an even longer presentation — you can go there and check — but I am going to mention a few additional ones which were not known at the time of writing that paper.

So let's start with suspect zero: the first sample we could identify using this exploit. This is going to be the baseline for comparison, because, as it turned out, all the later samples were derived from this one. There was no independent development going on around this exploit. This was a destructive trojan that appeared last April, and it displayed a decoy document with some sort of partner-seeking advertisement. Clearly, with that kind of decoy and a destructive payload, it was not used in a targeted attack, as you would expect from an APT player deploying a zero-day. I think it was deliberately released a little before Microsoft patched this vulnerability. The reason is unknown — perhaps to cover tracks, because if there is only one single entity that knows and uses a vulnerability, every piece of evidence points in one direction; if others start using it, the evidence gets scattered. Anyway, in this case the document starts with a large chunk — several kilobytes — of real junk content, not used, not displayed, but very convenient for identifying everyone else who copied this content. Whoever developed this was clearly highly skilled.

Then there were some early birds. One week after the initial sample was released, a couple of targeted attacks were performed using this vulnerability, mainly by the Duke group — recently covered by F-Secure in a great overview. They went after diplomatic targets, and they made very significant modifications to the exploited document. For example, they cut all of the junk at the beginning of the file; they stripped the RTF content down nearly to the minimum; and they also changed memory locations within the ROP chain. So they made very significant changes to the exploited documents.
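A toy version of that ranking rubric, just to make it concrete — the layer names and rank labels are mine, not the talk's formal taxonomy:

# Rank a sample's author by the deepest layer modified vs. the ancestor sample.
RANKS = ["generator user", "basic", "intermediate", "skilled", "advanced", "expert"]
LAYERS = ["payload", "shellcode", "rop_trivial", "rop_major", "full_control"]

def rank(modified_layers):
    depth = max((LAYERS.index(m) + 1 for m in modified_layers if m in LAYERS), default=0)
    return RANKS[depth]

print(rank(set()))                     # -> generator user
print(rank({"payload"}))               # -> basic
print(rank({"payload", "shellcode"}))  # -> intermediate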
Now, I'm not saying it is impossible to do all that in a week — I'm sorry, but it's very unlikely that that's what happened. My guess would be that they had prior knowledge of this exploit before they started working with it. If I had to guess, I would say this is the group most closely connected to the source of the exploit. The Dukes have a reputation of being supported by the Russian government; they have huge financial resources, and they have a history of using zero-day exploits. So it is not an unreasonable assumption that they were the first to use this exploit — but there is no strict evidence pointing in that direction.

Then there were some direct descendants: samples that reused the original document and did nothing else but swap the payload, the binary trojan at the end. That's clearly a very basic modification. These samples appeared about one month after the original release, and they were used by the PittyTiger APT group. When I said they didn't change anything else, that's not entirely true: they also changed the author name in the document metadata, which is something you can do in two seconds in a text editor, so that doesn't constitute major skill. Anyway, this group showed the very basic skills of exploit handling.

And then comes an interesting strain: Metasploit, and the direct descendants from it. Metasploit is a great tool for researchers, for penetration testing, for understanding the exploit — and it is also a great tool for malware authors, and they use it extensively to generate new samples, as in the cases I'm going to show you. The Metasploit module appeared about a week after the original release of the first document, and whoever created that module clearly understood some of the ROP exploitation — at least to the level that, as I mentioned, the main ROP chain contains some unused filler bytes, which are there only to be popped into a meaningless register. In the original sample these were filled with 0x41 bytes; in the Metasploit module they are filled with random values. These are the bytes that make it possible to identify whoever was ripping Metasploit for samples. (See the filler-byte check sketched below.) Anyway, whoever developed the Metasploit module was a skilled exploiter.

One of the direct descendants from Metasploit was the Havex malware — which was also mentioned in yesterday's presentation, as targeting the energy sector and looking for industrial control systems. But when I created these slides I didn't know that would come up, so I picked another example, and that is the Inception group. It was reported by Blue Coat, and later by Kaspersky under the name Cloud Atlas, and they directly connected it with the famous Red October campaign. Anyhow: they generated a sample with Metasploit, then swapped the shellcode and the payload — just replaced them. Additionally, in the case of Inception, they prepended another exploit block at the beginning, exploiting an older vulnerability. However, doing that, they messed up the RTF structure. This is a very delicate vulnerability: if you mess up the RTF structure, it breaks the exploit. And that happened to the Inception group: they generated about 13 documents with this exploit, and in 11 of them the exploit was actually broken.
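Back to those filler bytes: they make a handy fingerprint. Here is a toy classifier; the offsets are made up, since the real positions come from mapping the leveltext buffer layout of a given strain.

# Classify a suspected 1761 ROP chain by its "popped, never used" filler bytes.
FILLER_OFFSETS = [0x40, 0x44, 0x48]      # made-up positions within the chain

def classify(chain: bytes) -> str:
    fillers = [chain[o] for o in FILLER_OFFSETS]
    if all(b == 0x41 for b in fillers):
        return "original strain (0x41 fillers)"
    if len(set(fillers)) == 1:
        return "generator watermark (constant filler 0x%02X)" % fillers[0]
    return "randomized fillers (Metasploit-style)"

print(classify(bytes(0x50)))             # all zeroes -> constant-filler watermark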
So: they generated a sample with Metasploit just to use this vulnerability, and they broke it in about 90 percent of the cases. On one hand they are skilled, because they touched the shellcode and the payload; on the other hand, that is shadowed a little by the fact that whatever they created was not working.

Anyhow, there is a huge group of samples using this exploit that were made with some sort of sample generator. One could argue that Metasploit is also a generator, but here I'm talking about commercial tools sold in underground circles. One of them — I don't know what the generator is, I don't have a name for it, it has not been reported yet; we just see that hundreds of samples are generated by it, a lot of common banking trojan families are being distributed with it, and it is right now dominating the exploitation scene. The largest chunk of exploited documents that we currently see in our databases is generated by this tool. Apart from the main leveltext block — the one with the ROP chain and the shellcode — it has two additional blocks that have the same filler value all over them. They are not used; they are pointless; but they can be used as a watermark to point out all the samples created by this toolkit — the same trick as the filler-byte check from a moment ago. Anyhow, because they did touch the leveltext structures and the exploitation parts, it's sort of an intermediate skill set that the author of this generator showed.

Another one: Microsoft Word Intruder. It was blogged about by FireEye, and later this year — just a few weeks ago — we released a white paper about it. That's the other large chunk of exploited samples using this vulnerability; also hundreds of documents created with it. I suggest you read our white paper, because it's very interesting and I don't have the time to go into the details here. Anyway, it has very distinctive characteristics, and it is one of the very, very few cases where the malware authors actually touched the ROP chain: they built an alternative ROP chain instead of the original one. It performs the same task; it requires two more building blocks; it's a slightly alternative route. Documents generated by this toolkit exploit three different vulnerabilities within the same RTF file, and dozens of mostly banking trojans were distributed by documents generated with it. As for the level of skill of this kit's author: he had to touch the ROP chain, so this is really someone who understands exploitation at a high level.

A very interesting case was the Rotten Tomato case. Now, I'm a physicist by education; I'm a lousy programmer. When I have to program, I do it like a physicist: I take an example program, modify it to my needs, and beat it with a stick until it works. It was surprising to see that a Chinese APT group followed just about the same path of development — except for the "until it works" part. They wanted to use this exploit in their campaigns. So they took a sample generated by Word Intruder — the kit mentioned one slide before — which had a third exploit block at the end.
They just got rid of that block — perhaps it was too complicated for them to modify in place. They replaced the first exploit block, appended their own payload, and started to use it in a campaign. Now, the problem with this picture: it exploits two vulnerabilities. If one of them is triggered, the trojan of this APT group — some PlugX backdoor — gets executed. If the other one is triggered, the original Zbot from the sample they ripped from Word Intruder gets executed. So depending on conditions, it's either an APT attack or common cybercrime. That is a really unwanted situation for this group — mostly because they grabbed the sample so they could actually use the exploit.

So what did they do? They grabbed another sample, where the 1761 exploit worked, cut out the original block from the Word Intruder sample, and copied the block in from that other sample. And here comes the "until it works" part. Word Intruder has a slight problem: in at least half of the samples, the exploit doesn't work. And this Chinese APT group didn't get lucky: they had picked a sample where the exploit didn't actually work. When they copied in the block from the other sample, they did overwrite the non-working exploit with a working one — and they broke it immediately. Because in this case the shellcode looks for the payload at a fixed file offset, and when they copied it into their own document, there was this unused, encrypted Zbot executable at the beginning, plus another exploit block, so the fixed file offset was shifted further into the file. The Chinese authors never corrected the shellcode for this offset. (There's a small sketch of this failure mode below.) So they created samples, and used them in targeted attacks, where this exploit never, ever actually worked — and they kept using them for months in different targeted attacks. They started using it in Russia, distributing PlugX, and then moved their operation against Indian and Pakistani targets. Anyhow, they showed a really basic level of exploitation understanding — and I'm being very generous to them with this classification. There was, by the way, one case of successful integration — I don't think it was by the same group; it was deployed in Arabic countries — where they actually fixed the shellcode offset, and it actually dropped a Zbot.

Anyhow, let's switch to the evolution part. In this table I just blindly collected all the malware families I have seen using this exploit, and just by looking at the samples, I placed them into that skill-set matrix. However, here comes the fun part. As a university professor, you can do nasty things. One of the nasty things is to check whether their samples actually work. Like I said, in many cases — in fact, in the case of this exploit, in over half of the cases — the exploit didn't actually work. I mentioned it with Inception; I mentioned it with Word Intruder: the generated or reused samples just contained broken versions of this exploit, and the supposedly highly skilled cybercriminals simply failed to realize this fact. So that takes back a little value from their evaluation. And the other thing is the relations. One of my university professors had this really bad habit: after a test, he would draw descendancy graphs of who was copying from whom, and modify the marks accordingly.
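The Rotten Tomato failure mode in miniature: the shellcode reads the payload from a hard-coded file offset, so prepending anything silently breaks the drop. The offset and the payload marker below are hypothetical.

# Why the samples failed: the shellcode expects the payload at a FIXED offset.
PAYLOAD_OFFSET = 0x9C00                  # made-up; the real one is baked into the shellcode
MARKER = b"\xfe\xed\xfa\xce"             # hypothetical marker the shellcode checks for

def drop_would_work(doc: bytes) -> bool:
    return doc[PAYLOAD_OFFSET:PAYLOAD_OFFSET + len(MARKER)] == MARKER

original = bytes(PAYLOAD_OFFSET) + MARKER + b"payload..."
frankensteined = b"EXTRA-EXPLOIT-BLOCK" + original   # prepended block shifts everything
print(drop_would_work(original))         # True
print(drop_would_work(frankensteined))   # False -- exploit fires, trojan never lands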
So I'm doing the same with these malware authors: drawing the descendancy graph of who was copying from whom, and adjusting the marks accordingly (a toy version of that clustering is sketched below). For example, there is a large cluster of samples — or rather of cybercrime groups — where, although the samples they use show a great understanding of the exploit, that's not their merit. It's because they are using Word Intruder or some other generator. The merit goes to whoever created those tools. These groups are merely the users of the exploited documents; their skill set extends only to executing a generator and using whatever it spits out.

In this picture you can identify a couple of high-profile APT groups: for example PittyTiger, or the Energetic Bear group, which I mentioned. Numbered Panda and Nightshade Panda were responsible for the Rotten Tomato cases. The Hangover team also left its mark in this table, as did Karma Panda and the Dukes, which I mentioned earlier. So this is really the evaluation part of the test, and it places all the groups where they belong. And here is the dividing line: anyone on the left side of the table understands whatever happens after the exploit has done its work — they can deploy their malware, they can modify the payloads, but they really don't understand the exploitation itself. They don't have in-house expertise in exploitation. Whoever is on the right side of this table — those are the really dangerous players. Those are the ones who understand exploitation and file formats, and apparently have in-house expertise.

Now, what this table doesn't show are the numbers. The vast majority of the incidents we see — 99.9-something percent — belong in the region of players who show little to no understanding of the exploitation, and only a few incidents belong to the really dangerous guys. And even those are shadowed by the fact that even though they showed high skills in solving the problem, whatever they created was often not working. So they may be good programmers, they may seem good at exploitation, but they clearly lack the capability of determining whether what they created actually works.

So the conclusion is that malware authors in general — even the highest-profile APT or cybercrime groups — are clearly lacking in QA. They don't check whether what they use in actual attacks works, or, in multi-exploit samples, whether every individual exploit in the sample works. The common cybercrime groups deploying banking trojans have a better supply chain, because someone else makes the generators and they just buy them. But however skilled these groups are, they don't show enough knowledge and skill to port this vulnerability to other Office versions. So there is a certain limit to their capabilities. Still, they are very eager to use any new vulnerability that becomes available, as soon as they can get their hands on it, and they are going to use it in attacks — or try to.

But a final warning for you: even though they are not the ninjas you should be afraid of — just cat burglars — once they get into your house, they show very high capabilities and skills in emptying it out and cleaning out your assets. So they may not be good at exploiting, but once they get their foot in the door of your organization, they are very talented and resourceful.
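Back to that descendancy-graph habit for a second — a toy version of the clustering that powers it, keyed on suspect zero's junk header. The sizes are placeholders, and the sample table is assumed to be loaded elsewhere.

# Samples sharing the ancestor's multi-kilobyte junk header are copies of it.
import hashlib

samples = {}                                   # name -> raw RTF bytes, loaded elsewhere

def junk_fingerprint(doc: bytes, size: int = 0x2000) -> str:
    return hashlib.sha256(doc[:size]).hexdigest()

buckets = {}
for name, doc in samples.items():
    buckets.setdefault(junk_fingerprint(doc), []).append(name)
# Buckets with more than one member indicate copying -- edges in the graph.
print({k[:8]: v for k, v in buckets.items() if len(v) > 1})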
So be aware. But you should know that if you keep up with the exploit information, they are not really much ahead of you. Oh, and I have a final slide. I said this was going to be a test, so let's see if it really is a test; here are the objective criteria, by AMTSO, the Anti-Malware Testing Standards Organization. I'm just going through some of the criteria. I don't think I endanger the public with this test. I'm certainly not biased towards any of these groups; in fact, I'm equally biased against all of them. I think the test was reasonably transparent about the testing methodology; I mean, I spent the first 25 minutes explaining the testing methodology to you, so I was pretty clear about that. And finally, a test should have an active contact point, which in this case should be me, I guess. So that concludes my presentation. As I look around I don't see too many people sleeping, so I guess I reached my goal and kept you awake; I'll take that as an accomplishment. Thank you.
It is a common belief that APT groups are masters of exploitation. If anyone, they should know everything about it, right? Our research into the real-world uses of the CVE-2014-1761 vulnerability shows that this is far from true. It is common practice in the anti-malware world for security products to be compared with each other in comparative tests. Even the tests themselves can be evaluated, against the criteria of the Anti-Malware Testing Standards Organization. The only players who are not rated are the malware authors. This is for a good reason: their activities cover a wide range of operations that don't fully match and can't be measured exactly. The deep analysis of the samples using the CVE-2014-1761 vulnerability gave us a rare opportunity to compare the skills of a few different malware author groups. This is not a full and comprehensive test; given the complexity of the exploit, we could estimate the skills only in a very narrow slice of the full set: the understanding of the exploit. But the situation is the same as with any other test: if you know exactly what you are measuring, you can draw valid conclusions. The presentation will detail the exploitation process, explaining the role and implementation of the RTF elements used in the process, the ROP chain and the shellcodes. We will investigate the different malware families that were using this vulnerability, and discuss the depth of modification of the exploit. This gives us a chance to rate the understanding and exploitation skill of the authors behind these malware families. The comparative analysis made it possible to draw a relationship chart between the different malware families, showing strong correlation with previously known intelligence and adding a couple of new relations. The final purpose of the comparative analysis is to understand the strengths and weaknesses of our enemies in cyber warfare: the more we know about them, the greater our chances of a successful defense.
10.5446/18831 (DOI)
[...] sold only to law-enforcement agencies in democratic countries. That is probably the reason they were targeted, but we do not know. This was the first statement provided by the company; well, it was not actually provided by the company, because they had been hacked into, and it was posted on their own Twitter page. [...] The service could be built up with anonymizers in front of their applications, which helped them hide the traffic and the person behind the whole thing, and you could control everything from a single dashboard panel: collect all the proofs and monitor the infected devices. [...]
There is no need to introduce at length the by now well-known Remote Control System (RCS), the product that the Italian company Hacking Team developed for state investigative agencies (police, secret services, etc.). The 400 GB of data that was stolen from the vendor and published on the net gives professionals, politicians and the wider public plenty to think about. My talk consists of three parts. The first is a brief overview of how the product works: the system developed for delivering the exploits to their targets (Exploit Delivery Network: Android, Fake App Store) and for monitoring the already infected devices (proxy chain). The second part is a detailed analysis of the exploits used to infect Android devices. I present the complicated, rather complex, multi-step infection process, for which several brand-new 0-day vulnerabilities were used; these are described as well. The third part of the talk covers the techniques that served "inconspicuous" operation and hindered premature discovery (Virtual Machine and Cuckoo evasion, monitoring of antivirus products, etc.).
10.5446/18830 (DOI)
Well then, I would like to welcome everyone. I am Áron Szabó, and I work at E-Group. In the next 40-45 minutes we will talk about the near future of cryptography, and about how that near future already bears on the present: if someone wants to design and develop systems today, what they already have to pay attention to now. First this will be discussed at a general level, and then, to get a little into the specifics, we will also look at a concrete hash-based signature algorithm. In that context I obviously cannot avoid going through some of the mathematical background and the working logic of such a concrete algorithm, but I am neither a physicist nor a mathematician, so I have tried to describe these on the slides in a digestible way. [...] There are other mathematical problems on which we can base algorithms that will keep working. Why do people say that the current crypto algorithms are dead? Primarily because Shor's algorithm, running on a quantum computer, solves the underlying problems of factoring and discrete logarithms in polynomial time. [...] Well, depending on the data to be signed, this is the interesting part: how can we validate this, how can we verify it? With a Winternitz parameter of eight we have to hash 255 times, but in this case we hash 67 times.
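To illustrate the hash-count remark, here is a minimal Winternitz-style one-time-signature chain in Python. With a Winternitz parameter of w = 8, each secret value is hashed up to 2^8 - 1 = 255 times; signing reveals an intermediate chain value, and the verifier completes the remaining hashes. This is a teaching sketch of the underlying idea, not the LDWM/LMS construction itself, and the 67 below simply echoes the figure mentioned in the talk.

# One Winternitz hash chain with w = 8: public key is the 255th iterate.
import hashlib

W = 8
CHAIN_LEN = 2**W - 1          # 255 hash applications per chain

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, n: int) -> bytes:
    for _ in range(n):
        x = H(x)
    return x

secret = b"per-message one-time secret"
public = chain(secret, CHAIN_LEN)          # public key = top of the chain

msg_digit = 67                             # one base-2^w digit of the message digest
signature = chain(secret, msg_digit)       # signer reveals an intermediate value

# Verifier hashes the signature the remaining 255 - 67 = 188 times
# and compares against the public key:
assert chain(signature, CHAIN_LEN - msg_digit) == public
print("verified")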
Due to Edward Snowden's leaks, and to the dealings of D-Wave Systems with Lockheed Martin and Google, the topics of quantum computers and post-quantum cryptography (pqcrypto) have recently come to the fore, even among the standardization bodies: IETF RFC draft documents and ETSI reports have been written about the various pqcrypto options and about Shor's algorithm, and the requirements on the still-usable RSA parametrization have been tightened in the BSI guidelines. We know that there exist cryptographic algorithms, and underlying mathematical problems, that remain strong even against a quantum computer; however, there is still little experience with using them in today's X.509-based world of CA hierarchies (which the eIDAS EU regulation also prescribes). In the talk I present the properties and applicability of one hash-based signature algorithm (LDWM, pqcrypto) in an environment that uses X.509 data structures.
10.5446/18793 (DOI)
I started by telling you how nature makes very selective compounds by using complicated organic compounds, in which the combination of scaffold, functional groups and stereocenters in a way codes for a three-dimensional structure. That is similar to how nature codes for the structure of proteins by having the information in the amino acid sequence. With organic compounds, nature uses a more complicated approach to code for the structure and function of these compounds, and we just do it in a different way: you take a metal center, and the coordination bonds, in combination, of course, with the structure of the ligands, code for the three-dimensional structure of these compounds. And when you look at the space-filling models of this PAK1 inhibitor I showed you and of geldanamycin, you cannot say that the one is more complicated than the other. They are both complicated three-dimensional structures, they have very defined shapes, and therefore they also have very defined biological activities. So that is basically our approach, and we apply it to protein kinases because there are so many protein kinases, more than 500 encoded in our genome; it is a huge challenge to inhibit an individual kinase but not the other 499. Of course we want to apply this concept also to other enzyme families where you have selectivity problems, like proteases, phosphatases and so on. That is also very exciting, and we hope that in the future such scaffolds will be used more frequently in chemical biology and will also find their way into medicinal chemistry, into the pharmaceutical industry. We are convinced that in 20 or 30 years such metal scaffolds will be very common scaffolds for the design of drugs, and we will not be limited anymore to the very simple and not that sophisticated organic compounds. And maybe in the last minutes I want to point out one problem. I told you that we have this sophisticated metal center that gives us all these options to build structures; we have 30 stereoisomers in the worst-case scenario with six monodentate ligands. But the question is: how can we control the formation of these stereoisomers? We cannot synthesize compounds by making all 30 stereoisomers and then separating them. The reason why this is quite an enormous challenge is that we have to find ways to control the stereoselective synthesis of such compounds. You are aware of the fact that in organic chemistry there are hundreds if not thousands of groups caring about the stereoselective formation of one stereoisomer at carbon over the other, and now we are saying we need methods that allow us to form one stereoisomer out of 30. That is a problem that is orders of magnitude larger. To give you an example: this simple compound is just a ruthenium compound with three bipyridine ligands. This compound, a very simple compound of high symmetry, can form only two stereoisomers, the so-called delta enantiomer and the lambda enantiomer; you see there is a kind of screw sense here, so you have a right-handed screw and a left-handed screw. Until recently it was actually not possible to synthesize one enantiomer selectively over the other. This seems like a simple problem, but everybody had to separate these racemic mixtures.
So people made these compounds as racemic mixtures, and then they separated the racemic mixtures by chiral methods: chiral counterions and so on. That shows you how far behind we are in the stereoselective synthesis of metal compounds, and we think that the stereoselective synthesis of metal compounds has to go hand in hand with the evaluation of the biological activity. It is the same as in organic chemistry: if we were not able to do stereoselective synthesis of organic compounds, we could not make complicated bioactive organic compounds. So we started a research program in our group that really aims at controlling stereoselectivity, and we saw that this is a nice test system: can we find ways to make this compound stereoselectively, in an enantiopure fashion? And we developed a very simple strategy that is used in organic chemistry every day, and that is the use of chiral auxiliaries. The idea is that you use a chiral bidentate ligand that sits in the coordination sphere, controls the incorporation of additional ligands (the ligand-exchange reactions) and can later be removed. That is really the same way we use chiral auxiliaries in organic chemistry. The problem we had to overcome is that with a metal such as ruthenium (and in all our compounds we are interested in chemically and substitutionally inert metal compounds) bidentate ligands stick very tightly to the metal. So once you incorporate them into the coordination sphere, can you get rid of them later without compromising the chirality at the metal? My students Leigh and Sean developed something very nice, which we published this year in JACS a few months ago. We use these so-called salicyloxazolines. You can see here an oxazoline and a phenolate, and it coordinates as a bidentate ligand to the ruthenium, so here we have an O-minus and a nitrogen. In the alpha position we have a chiral group, derived from a reduced amino acid, an amino alcohol, that forms the oxazoline. And as you can see, this isopropyl group comes into close proximity to these two coordination sites. So if we have four leaving groups, what happens is that the first bidentate ligand tries to fill the coordination sites that are the farthest away from this isopropyl group, these coordination sites here, and the remaining coordination sites are left for the second bidentate ligand; in this way the absolute configuration at the metal is determined. And luckily, with this ligand we can remove the auxiliary by adding acid: protonating this oxygen decreases the coordination strength of the ligand. So in the presence of just a few equivalents of TFA we can remove this ligand and replace it, in a one-pot reaction, directly with the third bidentate ligand, and we can obtain this compound in a more than 99.5:0.5 enantiomeric ratio, practically in enantiopure fashion. That shows you how we think about doing stereoselective coordination chemistry at metal centers. We are very excited about it, and our goal is to be able, within the next few years, to synthesize stereoselectively compounds that are as complicated as, for example, this compound here, FL172. That is still an enormous challenge, and we are not there yet.
Okay, I hope you enjoyed this short summary of our efforts, and I hope that in the future more people will use metal compounds for the design of bioactive compounds, in particular enzyme inhibitors. Thank you.
Prof. Meggers talks about an approach to synthesizing asymmetric coordination compounds utilizing chiral auxiliaries.
10.5446/18772 (DOI)
Greetings from the Meggers group. I would like to give you a brief overview of the research that my group is doing. This is actually a very exciting time for chemists: we have the sequence of the human genome, we know all the gene products of our genome, and now of course a very important goal is to understand and to control the function of each gene product. And that is where chemists really can have an impact. For example, we can make small molecules that interfere with protein-protein interactions; we can make compounds that interfere with transcription and translation, with genetic processes, such as small molecules that bind to DNA or to RNA; or we can make small molecules that act as enzyme inhibitors and knock out the functions of these enzymes. But think about what this means: you want to make a molecule that really interferes, for example, with the function of just a single enzyme, one enzyme out of 25,000 proteins, together with all the DNA we have in the cell, the RNA, the membrane compartments and so on. So this is an enormous challenge of molecular recognition. And I claim, and I think a lot of people will agree with me, that the typical current small organic molecule cannot actually fulfill this task. If you think about it, a typical bioactive organic molecule is a heterocyclic compound with substituents on the periphery, and the problem is that it can adopt multiple conformations. In one conformation it binds to one target, and in another conformation it binds to another target. This promiscuity of conformation basically leads to unselective binding. And the question is: how can we solve this problem? That is what my group is really excited about, this problem of molecular recognition. If you think about solutions, you can actually look to nature, because nature found ways to deal with this problem. You probably know that complicated natural products have very specific biological functions; they sometimes really bind just to one particular target. In order to understand how these natural products do that, it is maybe helpful to have an example and to learn how they interact with their respective targets. I have here one slide that shows the natural product geldanamycin, shown on the left. It is already a fairly complicated macrocyclic compound, with multiple functional groups and multiple stereocenters, and actually quite difficult to synthesize. This compound binds selectively to the N-terminal ATP binding site of the heat shock protein 90, HSP90. It does so by adopting a C-shape, and in this C-shape it binds to a very globular, deep pocket. When you look at the space-filling model of this compound and how it binds to this pocket, it is basically a globular shape with functional groups presented on the surface. And that is what it comes down to: we have to find ways to make defined globular structures that have functional groups presented on the periphery in a very particular way. Nature does this by designing complicated molecules in which the scaffold, in combination with all the functional groups and the stereocenters, in a way codes for a three-dimensional structure. We synthetic chemists, at least my group, hesitate to make such complicated molecules, because the synthetic challenge is enormous and it is difficult to make, say, kilogram quantities of such a compound.
So we were thinking about other ways to make compounds that are globular and have a very defined shape, and because of this can bind very selectively to a target. You have to keep in mind that an active site is globular; it is typically a pocket. So you want to design globular molecules that are somewhat rigid, or at least have a defined shape, and complement the shape and the functional-group presentation of the active site. And if you think about organic chemistry: if you just take a benzene molecule, a heterocycle or a cyclohexane and put substituents on it, these compounds are not rigid and they are not globular; they are basically flat, more or less, and they have flexible arms. If you want to make something that is globular and has a defined shape, you typically need to fuse rings together, build bridging systems, and introduce stereocenters, other functional groups, and so on and so forth.
Prof. Meggers talks about the traditional approach for synthesizing an organic compound for the use as an enzyme inhibitor and the inherent problems.
10.5446/18771 (DOI)
Today I've been asked to give you a little bit of an idea of some areas of chemistry that we're involved with, and I thought it would be nice to give you a general rundown of the advances that have been made in the field of main group chemistry, specifically over the last 20 to 30 years or so. These advances have been very rapid; so rapid, in fact, that this change in main group chemistry since the early 80s has been termed the renaissance of main group chemistry. It has developed into an area that I like to call modern main group chemistry, and so today I wanted to give you an idea of why these changes have occurred and what potential there is for the future of the field. But to do this, I think it would be nice to have a little look at the historical aspects of main group chemistry. I've looked into this a little, and I've found that there are some direct and indirect links with the chemistry department here at Marburg, so maybe we can talk about those as well. Main group organometallic chemistry obviously involves the chemistry of compounds containing a chemical bond between a main group element and carbon, and this field began 250 years ago this year. The first main group organometallic compound was prepared in 1760, and this is in fact the first example of an organometallic compound altogether, if you consider arsenic a metal (it is actually a semi-metal, I suppose). It was prepared by a French chemist called Cadet de Gassicourt, who, for whatever reason, decided to react potassium acetate with arsenic oxide, and this generated a deep red, foul-smelling liquid that fumed on exposure to air. It is commonly called Cadet's fuming liquid, and it was later found that this liquid comprised several components, two of which were tetramethyldiarsine (that is, a compound containing an arsenic-arsenic bond, with each of the two arsenic centers bonded to two methyl groups) and the oxide of this compound. These were called cacodyl and cacodyl oxide respectively, names that came, not surprisingly, from the foul stench of these very poisonous compounds. There is actually a fairly direct link between cacodyl, Cadet's fuming liquid and the chemistry department here at Marburg, because in the mid-19th century a very famous professor of Marburg University, Robert Bunsen, was looking into the components of this fuming liquid, trying to understand what they were. I think Bunsen's well-known chemistry and notoriety attracted researchers from around Europe to come and work with him, and one of those researchers was a young English chemist called Edward Frankland, a name many of you will know. Frankland came to Marburg to carry out his PhD, which he completed in 1849, I think. Before he came to Marburg he was carrying out his own research, looking at the interaction of methyl iodide and ethyl iodide with zinc metal, and he found that these reactions generated, in both cases, mobile liquids which were pyrophoric; that is, they spontaneously combusted in air.
He didn't really know what these compounds were, but during his time at Marburg, I think, and afterwards, he realized what they were: they were in fact diethylzinc and dimethylzinc. If any of you have used these compounds, or zinc alkyls in general, you will know how remarkably pyrophoric they are; as soon as they see air they burst into flames. I suppose as an undergraduate chemist I was always amazed that in the mid-19th century people could handle these compounds and study them. We deal with pyrophoric compounds all the time; we have to handle them under inert atmospheres, and we have general techniques for doing this, using oxygen-free nitrogen, for example, or oxygen-free argon. Frankland himself had to use an inert gas as well, and another thing I found remarkable about this era of history was that the inert gas he used was dihydrogen. So you can imagine the potential for explosions in his laboratory, and I'm not sure whether he actually had any. So this, in my view during my undergraduate days, was an interesting era in the history of main group chemistry, if you want to call it that, if you consider zinc a main group element, which many people do. These compounds were really the forerunners of Grignard reagents. We all know that Grignard reagents are formed from the reaction of alkyl or aryl halides with magnesium metal; they are magnesium organometallic compounds, they have had vast importance over the last 110 or 120 years, and they were developed by Victor Grignard in France, who I think won the Nobel Prize for this chemistry in about 1912. Again, in my undergraduate days I carried out a research project trying to develop poly-Grignard reagents, so I was quite heavily involved in Grignard reagents and their synthesis, and it surprised me at the time, and it still does, that the mechanism of formation of these incredibly important reagents is still pretty much unknown. There are a lot of theories out there on how they are formed, and some evidence, but the definite mechanism of the formation of Grignard reagents is unknown at this stage. One theory, though, suggests that intermediates in the formation of these compounds contain magnesium-magnesium bonds, that is, alkyl magnesium-magnesium halides, and so formally these compounds contain magnesium in the plus-one oxidation state. And this is how we became involved, with our interest in low oxidation state magnesium compounds, 25-odd years later. So that's an aside, and I'll talk about it in a minute; but the chemistry of the main group elements was beginning to be developed up to this stage, the early part of the 20th century, and then it was rapidly developed in the first half of the 20th century. I think by the third quarter of the 20th century it was pretty well understood what main group compounds can do, how they behave and what their properties are, and they became, perhaps, a little bit boring, I would say. All of their properties were known; there were no surprises on offer. Compounds containing the s- and p-block elements contain those elements in one of two oxidation states, depending on whether the valence s and p electrons are involved in bonding or just the valence p electrons. You could predict the coordination numbers of these compounds very well, and a whole range of rules was developed, written if you like, to pigeonhole the properties of main group, and especially p-block, compounds.
And one of these rules was the so-called double-bond rule, which told us, reassuringly, that you could not form compounds containing multiple bonds between the second- and subsequent-row p-block elements. I think a general belief in this rule led to a stagnation in the chemistry of p-block element compounds in the mid-to-later part of the 20th century, and that was in contrast to the chemistry of the transition metals, which developed really rapidly in the second half of the 20th century. Main group compounds are nearly always colorless; they have one or two oxidation states; they really don't show any catalytic behavior. Transition metals, on the other hand, have nice colors and variable oxidation states, and we know that these properties derive from the partial filling of their valence d orbitals; the close energy spacing of these d orbitals led to these compounds being useful, for example, in catalysis, a hugely important area of chemistry where transition metal compounds have been used. So this belief that main group compounds had perhaps not very interesting chemistry persisted until the 80s. And then, in 1981, in my view anyway, this renaissance in main group chemistry occurred, and three important discoveries were made in that year. They were, firstly, the preparation of the first compound to contain a phosphorus-carbon triple bond; these are called phosphaalkynes, and this work was carried out by two German chemists, Werner Uhl and Gerd Becker, both of whom, I think, have or have had associations with the chemistry department here at Marburg. Secondly, there was the development of a compound containing a phosphorus-phosphorus double bond; this is called a diphosphene, formally has phosphorus in the plus-one oxidation state, and was prepared by the Japanese chemist Yoshifuji. And finally, the first example of a compound containing a silicon-silicon double bond was made in 1981. This is the silicon analogue, if you like, of the well-known alkenes, and it was prepared by a US chemist, Bob West. Once these compounds had hit the literature, the general belief that low oxidation state and low coordination number main group compounds could not be formed was thrown out the window, and there was an explosion in the development of these compounds. Obviously these species are very reactive; thermodynamically they generally shouldn't exist: they should oligomerize, they should disproportionate, they should react with oxygen, and so on. So to prepare these compounds and stabilize them kinetically, a whole range of very bulky alkyl, aryl and amido ligands was developed; attaching these ligands to the metal centers stops the compounds oligomerizing or disproportionating, for example. This allowed the development of a vast range of metal-metal bonded compounds, element-element bonded compounds, and huge clusters, in some cases containing up to 84 gallium atoms, chemistry developed by Schnöckel's group at Karlsruhe; very fascinating compound types. I suppose some people see these as mere chemical curiosities, but their high reactivity will obviously lend them to a number of applications. This is starting to develop now, and I might talk about that in a minute.
But because all of these new compound types were being developed, for example compounds with silicon-silicon triple bonds, germanium-germanium double bonds, or what were described as iron-gallium triple bonds, new theoretical methods and bonding models needed to be developed to try to understand the bonds between the elements in these compounds. A whole range of theoretical techniques has been developed to analyze the bonding in such compounds, and this has developed quite well, except that it is not fully mature yet, and the interpretation of the bonding in a lot of these systems has led to some vigorous debate in the literature, let's say. It is very interesting to read some of the papers; people have strong views on certain compounds and the bonding in them. One example is the iron-gallium triple-bonded species that I mentioned. I won't go into the specifics of the compound, but some people see it as having an iron-gallium triple bond, some an iron-gallium double bond, and some an iron-gallium single bond; so you can see how controversy can be generated in these systems. And again, here there is a direct link with Marburg, in that one of your current professors, Gernot Frenking, is, I would say, one of the world leaders in analyzing the bonding in multiply bonded p-block compounds using theoretical methods. So, as I said, the chemistry of these systems has developed rapidly over the last 20 to 25 years, and they were probably viewed by the chemical community as largely being just chemical curiosities: very interesting compounds, but not of much use. That has really changed a lot over the last three years or so. A number of groups are showing that the high reactivity of these compounds can be put to useful purposes, and I would name Phil Power's group at UC Davis, Doug Stephan's group in Toronto, and Guy Bertrand's group at UC Riverside. These groups are looking at using low-coordinate, low oxidation state main group compounds to activate small molecules, for example dihydrogen, ammonia and ethylene. This had never been done before for hydrogen and ammonia; the first activations of these important molecules with low oxidation state p-block compounds occurred only in the last three or four years, and it has now been shown that in some cases these activations are actually reversible. So I think it is quite obvious that such low oxidation state p-block compounds might find use in catalysis; indeed, one day they might even be able to replace transition metals in catalytic processes. We all know that most of the transition metals used in catalysis are the very expensive ones, so if we can replace those with extremely cheap p-block element compounds, then there is a definite use for such compounds.
Prof. Jones (Monash University, Australia) gives a short historical overview over the developments in main group chemistry.
10.5446/18747 (DOI)
A few years ago we thought about a new concept to design defined globular structures: using metal scaffolds to make small-molecule inhibitors for enzymes. You can see here a representation of metal compounds in which the metal has just a structural role; it basically helps to organize the organic ligands in the active site and really acts as a structural center. The exciting part for us is that metals can have coordination geometries that differ from those of carbon, where you are limited to linear, trigonal-planar and tetrahedral coordination spheres. I think we have shown over the last couple of years that this is a really promising approach: we have designed compounds whose potencies can compete, in their classes, with the best organic compounds available, and in some cases we think we have compounds that are better than the best organic compounds for the respective target. I want to show you one design that we developed a few years ago, in which we developed protein kinase inhibitors using a natural product as inspiration. You can see on the left the natural product staurosporine, which is an indolocarbazole alkaloid: you see these two indoles, and then you have a carbohydrate moiety. This compound binds to a protein kinase ATP binding site in an ATP-competitive fashion. The heterocyclic moiety slides into the active site and can form two hydrogen bonds to the so-called hinge region of the ATP binding site; then the active site opens, and the more globular carbohydrate can fill the area where the ribose moiety of ATP binds. We used this as inspiration and designed compounds that have a similar shape but are simpler in their design and contain a metal, as you can see here. What we call the pyridocarbazole is quite similar to this indolocarbazole; we just replaced one indole with a pyridine, which gives us two coordination sites, and we can introduce this ruthenium fragment: you see here the cyclopentadienyl half-sandwich and the CO. This compound turns out to be a very potent inhibitor of the kinase GSK-3, actually by a factor of ten better than the natural product, and over the years we improved the design and ended up with the compound shown here, which has a Ki of less than 5 picomolar. So this organometallic compound is around three to four orders of magnitude more potent than this already potent natural product, and it is very selective for this particular protein kinase. You can even see here a co-crystal structure of this organometallic compound with the kinase GSK-3, and how this compound really nicely complements the shape of the ATP binding site. And I think you will probably agree with me that this is a unique molecular structure that you probably cannot mimic with purely organic elements: you have here this ruthenium that sits on the cyclopentadienyl moiety in the middle and forms multiple bonds to the individual carbons of the Cp ring. So that was for us a very exciting initial direction in this overall project, and it encouraged us that using metals as structural scaffolds is very promising. And I can tell you, these compounds are chemically very stable.
They are not compromised in any way in a biological sample: they are stable in the presence of thiols, and they are stable against water and oxygen. Such compounds are basically as stable as organic compounds; they just differ in their color. Instead of being white or yellow, they are maybe purple or black, or sometimes blue or green; that is basically the only obvious difference. So that looks like a nice system, but we knew that these half-sandwich compounds are still somewhat limited in their structural options, and we thought it was important to move ahead and expand this towards truly octahedral compounds, because with an octahedral metal center you can form six bonds. And that is kind of beautiful: do you know a carbon that can form six bonds? Do you know a carbon that can form octahedral coordination geometries? You cannot; you cannot form stable organic compounds in which a carbon makes six bonds. But a lot of metals can. Ruthenium in particular forms very stable octahedral coordination geometries, so we want to use ruthenium compounds, and in some respects also iridium, osmium and rhodium compounds, as mimics for octahedral carbon. The reason we think such an octahedral coordination sphere is so promising for making small-molecule inhibitors is this: an octahedral center bearing six different monodentate ligands can form up to 30 stereoisomers, and here you see all 30 stereoisomers listed. An asymmetric carbon can form just two stereoisomers. So you go from four substituents to six, and the number of stereoisomers increases by more than an order of magnitude, from two to thirty. This number of stereoisomers, 30 in this case, is an indicator of how sophisticated such a center is for building globular structures, because we have 30 ways to organize the six groups in three-dimensional space: a very sophisticated structural center. If you take two of these centers, you can make 900 stereoisomers; with three centers you can form, in theory, 27,000 stereoisomers. So the number of stereoisomers is, in a certain way, an indicator of complexity. And you can carry this over to complicated natural products: if you take a complicated natural product and count the stereoisomers its stereocenters could in principle form, that gives you a certain idea of how complicated the structure is. So that is our hypothesis for making such octahedral compounds, and that has been the focus of the last years. In particular, since we came to Marburg two years ago, we have really focused on moving away from these half-sandwich compounds to octahedral compounds. The half-sandwich compounds were nice because they have a lower symmetry and don't have so many stereoisomers, so they were a good starting point; but now we really want to go into complicated octahedral compounds and explore their potential to become inert bioactive compounds capable of really sophisticated molecular recognition. And I want to give you one example of a recent design that I think shows very nicely how powerful such an approach is. We have a collaborator, Ronen Marmorstein, at the Wistar Institute in Philadelphia, who is very interested in the protein kinase PAK1.
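Before moving on to the PAK1 story, the stereoisomer arithmetic above is easy to verify: with six different monodentate ligands, the 6! = 720 ways of placing them on the vertices of an octahedron collapse to 720 / 24 = 30 distinct stereoisomers once the 24 rotations of the octahedron are factored out (mirror images remain distinct), and independent centers multiply. A quick check in Python:

# Stereoisomer counts for octahedral centers with six different ligands:
# 6! placements divided by the 24 rotational symmetries of the octahedron.
from math import factorial

octahedral_rotations = 24
isomers_per_center = factorial(6) // octahedral_rotations   # 720 / 24 = 30
print(isomers_per_center)        # 30   (one center)
print(isomers_per_center ** 2)   # 900  (two independent centers)
print(isomers_per_center ** 3)   # 27000 (three independent centers)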
And so what we did is we took a small library that we have in the lab, a standard library of organometallic compounds, and screened it against this kinase, and found that this compound here, NP309, was in this library the most potent compound for this kinase. This scaffold looks familiar; that is basically the GSK3 inhibitor I showed you on the previous image. And so this compound NP309 turned out to be actually a quite decent inhibitor for PAK1: 1000 nanomolar, so a one micromolar IC50, which is considered a very nice primary hit. However, as I told you before, this scaffold is very potent for GSK3. This compound in particular, with the OH group on the indole and the fluorine on the pyridine, has an IC50 of 350 picomolar for GSK3. So you would imagine that you can never modify this scaffold and make it selective for the kinase PAK1 over GSK3. It seems to be mission impossible. However, we went ahead anyway, and Jasna, from my group and Ronen's group, co-crystallized this compound with PAK1. And you can see here the crystal structure, and in particular what was exciting for us: we saw all the hydrogen bonds that this compound undergoes with the active site, as we expected. However, when you look at the cut through the active site, you see that this active site is very open. And the Cp ring here, shown as a space-filling model, does not undergo any interaction with the active site. It just basically hangs there in free space and simply cannot fill this large active site. So we thought that's actually a great model system to investigate the power of the larger octahedral compounds. And we just used this lead structure and replaced the Cp with this large bidentate ligand and a chlorine. You can see in this space-filling model that in this way we actually blow up the structure. And indeed we found that by doing so, this compound, initially very selective for GSK3, is after this change actually tenfold better for PAK1. And suddenly it's basically not an inhibitor for GSK3 anymore, because it does not fit into the active site. And we backed this up with a co-crystal structure again, also done by Jasna. And you can see all the hydrogen bonds. And what is most important, you can see here again, in this cut through the active site, that now this large compound can really nicely complement this open active site and fills it completely. And the distance here between the para position of the pyridine and the CO, which is around eight angstroms, is like a yardstick that basically determines whether the compound can fit into an active site or not. GSK3 is not open enough; the compound will not fit in there. But PAK1 is open, and the compound can now undergo all these interactions that are important for the affinity and selectivity. And I think that, to our knowledge, this compound here, which we call FL411, is actually the best ATP-competitive inhibitor for this protein kinase to date, better than any reported purely organic compound. And it's not an organic compound; it's a coordination compound.
Professor Meggers explains a novel approach to the synthesis of enzyme inhibitors, in which his group utilizes the unique properties of ruthenium-based organometallic compounds.
10.5446/18741 (DOI)
We've been involved in low oxidation state main group chemistry for probably 15 years or so now, and about three years ago we began to wonder if we could extend this low oxidation state chemistry of the p-block elements to the s-block elements. This had largely not been done before; there were no examples of compounds containing two s-block elements bonded to each other covalently, and we wondered if we could perhaps extend the well-known chemistry of low oxidation state p-block compounds to this area. And I suppose this harks back to my undergraduate days, where I was looking at the formation of new Grignard reagents and thinking about the mechanism of formation of Grignard reagents, and the fact that magnesium-magnesium bonded compounds had been proposed as intermediates in the formation of Grignard reagents. So we wanted to see if we could actually make such systems, and we drew some inspiration, I suppose, from another landmark study in 2004 by a Spanish chemist, Ernesto Carmona, who managed to prepare the first examples of zinc-zinc bonded compounds: formally, compounds containing zinc in the plus-one oxidation state. Surprisingly, he made these compounds, or this particular compound, which has a zinc-zinc bond in which the two zinc centers are each coordinated by a pentamethylcyclopentadienyl ligand, a bulky ligand that stabilizes this system towards disproportionation. But he made this compound actually by accident, and one of the reagents that he used in its preparation was diethylzinc, the same compound that Frankland made in 1850. So given the chemical similarities, I suppose, between zinc, a group 12 metal, and magnesium, a group 2 metal, and the fact that zinc-zinc bonds could be stabilized, we then thought, well, maybe we could use some of the bulky ligands that we'd developed to stabilize low oxidation state p-block compounds, for example gallium(I) compounds and germanium(I) compounds, and maybe these ligands could kinetically stabilize magnesium-magnesium bonded systems. And that's what we set out to do. And the main ligand types that we used were chelating, mono-anionic ligand systems such as the bulky guanidinate ligands which we developed in our laboratory, and also beta-diketiminate ligands, which we found had similar stabilizing properties to our bulky guanidinates. And so we began by preparing magnesium(II) precursors to our target magnesium(I) compounds, and these were magnesium iodide systems incorporating the bulky guanidinate or beta-diketiminate ligands. And we simply started by trying to reduce these systems with potassium metal at room temperature, and to our surprise, in early experiments, we managed to prepare these magnesium-magnesium bonded systems. I think if we hadn't had results early on, we might have abandoned this study, because we thought intuitively that these systems would be pretty hard to stabilize and pretty hard to eventually access, but that proved not to be the case. So we managed to prepare a range of such systems. They are quite reactive, as you might expect, but they're not terribly air and moisture sensitive, so the ligands that we've used to stabilize them really do protect them from oxidation and hydrolysis. They're remarkably thermally stable. Theoretical studies have suggested that the magnesium-magnesium bonds in these systems are quite strong, but not remarkably so: about 45 kcal per mole.
But these compounds in some cases can be stable at 300 degrees Celsius, which to us seemed really quite remarkable. And once we had prepared these compounds, we thought we must really prove that we have magnesium(I) systems. This is quite a big claim, and it would not look good if we were wrong. And so we spent a long time trying to prove that these systems had magnesium-magnesium covalent bonds. I'm not going to go into what we did to do that, but we managed it, and we published the work, I think in the last week of 2007, in Science. And after that time we thought, well, we now have these systems, we really must look at their further chemistry and their properties. And really this work is still in its infancy, but we have looked in some detail, using experimental and theoretical techniques, to try and analyze the metal-metal bonding in these systems. We've used DFT calculations on the theory side and experimental charge density studies on the experimental side, and this technique allows you effectively to see the electrons between the magnesium centers. And this shows that these systems do indeed contain magnesium-magnesium covalent bonds, albeit with rather diffuse electron density between the magnesium centers, but, we believe, certainly a covalent bond. And we've also found that although there is electron density shared between the two magnesium centers, it's really quite diffuse, in fact very diffuse, and this has led to some strange properties for these compounds. The magnesium-magnesium bond is what we call deformable. We can stretch it quite significantly, for example by coordinating the magnesium centers with other Lewis bases. We can stretch the bonds by up to about 8%; we can elongate them from about 2.85 angstroms in the uncoordinated dimers to about 3.05 angstroms in the coordinated dimers, which is a remarkable elongation. So that's one area that we've been looking at. We've also been looking at the use of magnesium(I) systems as reducing agents. Obviously, with this element in this unusual oxidation state, you would expect that these dimeric systems would be able to deliver electrons to substrates, and we are indeed looking at that with respect to their use as what we call bespoke reducing agents in both organic and organometallic synthesis. So in organic synthesis we've shown that these dimeric systems can act as very facile two-center, two-electron reducing agents: they can deliver two electrons very easily to unsaturated organic substrates. We've looked at many reactions, and we've found that we can induce carbon-carbon bond forming reactions and nitrogen-nitrogen bond forming reactions; we can carry out oxidative insertion reactions; we can carry out reductive cleavage reactions. We've seen a whole range of reaction types within organic synthesis using these magnesium(I) systems. And I think most importantly, many of the products of these reductions differ from those that you obtain by reducing the same substrates with more classical reducing agents used in organic synthesis, such as samarium(II) reagents and alkali metals. And because of this we think that there is a potential use of these systems as selective reducing agents in organic synthesis, and we are developing that at the moment in collaboration with organic chemists.
With respect to inorganic chemistry, we're also investigating the use of these systems as reducing agents to access previously unknown examples of low oxidation state p-block complexes. So we're using these magnesium(I) reagents to try and prepare the systems that got us into this area in the first place. And we've had quite a bit of early success; just one example is a compound we published, I think in late 2009. We took an N-heterocyclic carbene adduct of germanium dichloride, a simple compound, and we reduced it with our magnesium(I) compound. This worked, and it generated a magnesium(II) chloride system. But the other product in this system was a compound that contained two N-heterocyclic carbenes coordinated to a Ge2 fragment, with no other substituents on the germanium. So formally this compound contains germanium in the zero oxidation state. A very unusual compound; you could think of it as a soluble source of this element. So at the moment we're trying to extend this to other elements in the p-block, and indeed in the d-block. And we can see these NHC adducts of the elements, if you like, as soluble sources of those elements that can deliver those elements to other reactants in synthesis. And this is something that we've really only just begun work on. There are a number of other groups working on this area of chemistry around the world, Greg Robinson's group in Georgia for example, whose excellent work got us into this area. So that's another area of chemistry that we've been looking at with our magnesium(I) compounds. And another main area that we are looking at is using these magnesium(I) compounds as soluble models, if you like, to examine the mechanisms and the kinetics of the hydrogenation of magnesium metal. So magnesium metal reacts with dihydrogen to give magnesium hydride. This is an important reaction. It's a reversible reaction, and it's important because magnesium dihydride contains about 7.7% by weight hydrogen. And so it's finding use as a hydrogen storage system in a number of devices such as fuel cells, etc. So it's a reversible hydrogen storage system, and in the rapidly developing hydrogen economy, such systems are quite important. But this hydrogenation of magnesium metal to give magnesium hydride has problems, kinetic problems. The kinetics are slow, and in fact you need a temperature of about 300 degrees Celsius to hydrogenate magnesium metal, and to dehydrogenate magnesium hydride you need about 300 degrees Celsius or greater as well. So really it's obvious that you can't use these systems in portable devices. And so a lot of work has been carried out to try and improve the kinetics of the hydrogenation of magnesium metal, for example by doping it with transition metals or alloying it with p-block metals, for example aluminium. And this tends to work in some cases, and the hydrogenation temperatures of magnesium alloys have been reduced into the hundreds of degrees, which is a usable range. But it's not really known why this improvement in the kinetics of the hydrogenation of magnesium occurs, or what the mechanisms are. And so we're beginning to wonder if we can use these magnesium(I) systems as soluble sources of magnesium, if you like, to try and look at how the magnesium is hydrogenated in the presence of other metals. And this is an area that we have looked at in the last six months or so.
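As a quick arithmetic check of the 7.7 weight-percent figure quoted above, using standard atomic masses (Mg about 24.31, H about 1.008):

```latex
\[
  w(\mathrm{H}) \;=\; \frac{2 \times 1.008}{24.31 + 2 \times 1.008}
  \;=\; \frac{2.016}{26.33} \;\approx\; 7.7\%
  \quad \text{by mass for } \mathrm{MgH_2}
\]
```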
We haven't published anything on it yet, but hopefully we will be doing so soon. So I'm not going to dwell on that. So really that's all I wanted to talk about today. I wanted to give you just a bit of an overview of this rapid development in main group chemistry over the last 30 years or so. I think it's gone from being a rather staid and perhaps boring area of chemistry to what I think is one of the most exciting and rapidly developing areas of inorganic chemistry studied today. And I think this renaissance in main group chemistry can tell us one lesson, and that is: as chemists, when we see rules in textbooks, maybe we should question those rules. Because if we do question those rules, and we prove that they shouldn't be rules, then we can access new areas of chemistry and very interesting new compounds which have potential applications. And if they don't, well, they have enough fundamental interest to keep us excited.
Prof. Jones (Monash University, Australia) talks about new approaches in the field of main group chemistry.
10.5446/18730 (DOI)
So you want me to speak about carbene chemistry? Okay, so maybe the history to start. The story begins in 1864, so a little bit more than 150 years ago, when our ancestors were trying to make CH2. And to do this, they started from methanol, which is a very simple molecule, and of course, if you remove water from methanol, you end up with CH2. And what is interesting is that at that time it was not clear that carbon is always tetravalent, so CH2 could have been stable. It turned out it was not stable, of course. And then they realized, in fact only at the beginning of the 20th century, that carbon cannot have only two neighbors with a remaining lone pair. And this is work by Staudinger, here in Germany. However, even if carbenes are not stable, people used them a lot. In the 40s and the 50s, here in Marburg, Meerwein, and this is a C-H insertion reaction, started from a carbene, a transient carbene of course, something which has a very, very short life, let's say some nanoseconds. And he trapped it inside a C-H bond, so this is your C-H bond insertion. And then some other reactions, like the cyclopropanation reactions that you learn, I guess, in the first year of your studies. And then in the 60s, chemists tried to visualize these carbenes, and to do so, they had to work at very, very low temperature. Of course, if you work at very low temperature, you can stabilize these species, because they cannot move and they cannot meet their neighbor, and so they are kind of stable. And if you work at a few K, let's say 4 K, 10 K, you can see the carbenes. And then came the work by E. O. Fischer in Munich, and he got the Nobel Prize, as you might know, where he was able to stabilize a carbene in the coordination sphere of a metal. So this is the early story. For many, many years, nobody tried to make a stable carbene, or those who tried, like Wanzlick in Germany, in Berlin, failed. And then we came, and by accident, I guess, we made the first stable carbenes. What is interesting is, at the end of the 80s, when we did this, everybody thought that carbenes were curiosities, just toys. And myself, I was the first guy to say it's a curiosity. And it turned out that in 2010, that means, what, 20 years later, every year more than 3,000 papers are published with a carbene. So this is a tremendous expansion of this field. And now, people use carbenes in catalysis, transition metal catalysis, organocatalysis, and to stabilize reactive species. This is kind of fun: you use a carbene, which was supposed to be a reactive intermediate for more than a century, and now you use this carbene to stabilize other unstable species. So I think it's really fun. You can stabilize radicals, you can stabilize many, many things. There are even some applications, some medical applications. For instance, carbenes are used to transport silver in the body, and this is an important new application of carbenes. So you think about this: in, let's say, 20 years, we went from laboratory curiosities up to something that you can find everywhere. And there are several types of carbenes, and it turned out that, in contrast to the first carbene that we prepared in my group, which turned out not to be so useful, Arduengo's carbenes, the NHCs, the N-heterocyclic carbenes, are by far the ones used today. However, there are new generations of carbenes coming from different labs, including my lab, of course, which might be more powerful. The problem is that when a community starts to use a tool, everybody uses this tool.
And it's very difficult to explain that there are new tools. You know, there is a good example in chemistry, not about carbenes, but a perfect example. This is the polymerization of ethylene, the Ziegler-Natta catalysts, all these kinds of things. There are new, extremely powerful catalysts which are on the market, but no company at all wants to change a process which is 60 years old. And with carbenes, it's exactly the same thing. In the literature, you have thousands of papers, tens of thousands of papers, using N-heterocyclic carbenes, and they are commercially available. Nothing else; they are used just because other people used them. And I think it's really striking: when something becomes fashionable, you have many labs which go into this topic, and they don't want to go outside the mainstream. Having said that, I do believe that some of the carbenes coming from my group right now are by far more interesting for many types of applications. And so what we need is to have two or three groups starting to use them, and then I'm absolutely convinced that hundreds of groups will use them. I mean, this is human nature, you know; everybody wants to be in the mainstream. That's the only reason I can see. What is fascinating for me, again, is that carbene chemistry 20 years ago was just fundamental research, and nowadays you find it everywhere. So this is the most important thing. And I think what is really interesting is that 20 years ago, carbenes were supposed to be unstable, and then came the first stable carbene, the second stable carbene, the third stable carbene, and so on and so forth. And nowadays, you realize that many, many, many types of carbenes are stable. It seems that once somebody has discovered a type of species, then this species becomes available for everybody, stable for everybody. And as I mentioned at the beginning, carbon is supposed to be tetravalent, and there are some rules. For instance, if you have an sp-hybridized carbon, it's supposed to be linear. This is the case for alkynes; this is the case for allenes. Okay, now you can play some tricks. And so, for instance, we recently reported what we call bent allenes. And the idea is: how is it possible to transform a compound which seems to be rigid, which has to be linear, into something which is very flexible and which is bent? And once you discover this, then it's easy for everybody to make bent allenes. So it seems that you have a kind of energy barrier to find something new, and once this barrier is overcome, then it's open, and everything seems very simple. I'm sure there are many examples, for instance, the rare gases. Okay, for many years, everybody thought, okay, these guys are not reactive at all. And then if somebody finds a reaction with xenon, whatever, then thousands of reactions will be discovered. It seems just as if Mother Nature put a lock on a problem, and then when you open the door, the problem is over. And I think this is something which appears to me really fascinating for scientists: to open this door. Think about something else. The first time somebody walked on the moon, it was something phenomenal, right? And now I'm quite sure that if the US or the Russians or the Japanese, whatever, send a new guy to the moon, maybe it gets two minutes on the TV, maybe three minutes, I don't know. And that's all, just because that has been done. It's open. So I think this is something fascinating for scientists. So carbenes are just an example. But for me, the most important thing is this.
You have a door, and you have to find the right key to the door; then you open the door and the story is over. And my story is over.
Prof. Guy Bertrand (University of California, Riverside) talks about the history of carbene chemistry, bent allenes, and how the discovery of stable carbenes opened a locked door in chemistry.
10.5446/18705 (DOI)
What made me go into a research area which is far away from synthetic organic chemistry? You may know that I've worked in and done research in synthetic organic chemistry, developing methods, for decades. So some people have asked me: what made you go into this completely different area, which involves molecular biology, enzymology, and so on? Well, I thought about this question, and it turns out that the roots can be traced back to Marburg, here. When I came from Bonn to this university in 1980, my colleague Reinhard Hoffmann suggested that we offer to the Gesellschaft Deutscher Chemiker, the German Chemical Society, a course on stereoselective synthesis: methods in asymmetric catalysis and stoichiometric reactions. So I thought that's a nice idea, and I said yes, I'll do it. And we asked another colleague, Professor Gais, at that time at Darmstadt University, to also participate. And we divided the subjects up, and Professor Gais was responsible for enzymes as catalysts in synthetic organic chemistry. And we offered this course every two years for a whole decade. And we were all present as the three speakers during the whole course. It was always a one-week course, eight hours a day. So I learned a lot myself, not just the participants, who were students, industrial people and so on, from all over Europe. It was really one of the best of the Gesellschaft Deutscher Chemiker's Fortbildungskurse, their continuing-education courses. So I learned something about enzymes, and I had previously no knowledge of that subject whatsoever. So I was sensitized. And then in 1994, I read, by chance, a paper in Nature entitled "DNA Shuffling", written by a molecular biologist by the name of Pim Stemmer. I was curious. I read it. I didn't really understand the details, but it was clear: this has to do with directed evolution. And he was interested in antibiotic resistance, beta-lactamases and so on. So I started to read the literature more on directed evolution. And I read the seminal paper by Frances Arnold at Caltech, which appeared in 1993. She went through several cycles of mutagenesis in order to increase the stability of a protease. So what does this actually mean? What is directed evolution? That's the second question I have on my mental list here. Everybody knows what evolution in nature is. It's a continuous cycle of gene mutagenesis and selection; gene mutagenesis, selection. It's a powerful driving force in nature. Nothing can be understood in biology in the absence of evolution. And it has been the dream of enzymologists and evolutionary researchers to simulate this process in the laboratory, in other words, to perform evolution in the test tube. And this is what Stemmer did. This is what Arnold performed. And we read this at a very early stage. And then this posed the question: can we harness this powerful force, namely evolution, put it in the test tube, in order to control a parameter which admittedly is not trivial, namely asymmetric catalysis, enantioselectivity? So let me begin with a cartoon. Here you see our new approach to asymmetric catalysis: directed evolution of enantioselective enzymes. And in the upper right part, you see a circle. It symbolizes a wild-type enzyme, in other words, the enzyme that occurs in nature. It has poor enantioselectivity in the reaction that you or we may be interested in. So we take the gene, the square to the left, which encodes this enzyme, and subject it to a gene mutagenesis method.
And there are a number of these methods available, which were developed in the 1980s and 1990s, even to this day. Such things as error-prone polymerase chain reaction, a shotgun method that is the most popular method used to this day. Then we have DNA shuffling, which is a recombinant method; I've mentioned it already: Stemmer. You take a gene or two genes, you slice them enzymatically into pieces, and you reassemble them. So this is simulating sexual evolution. Let me now show you some details of what is actually done in the laboratory. Once you perform one of those methods on some gene which encodes an enzyme of interest to you, you have it in the test tube here, and then you transfer this collection of mutated genes into a bacterial host such as E. coli, and you plate out on agar plates, on many agar plates. Here is, symbolically, the first plate, and after a while you see little colonies growing, each coming from a single cell, producing a mutant. You collect them, you harvest them, you give them food, and they feel good. You put them individually into the wells of microtiter plates, and then you suddenly have hundreds and thousands of little factories producing potentially enantioselective enzymes. We used some of these methods, such as error-prone PCR and also DNA shuffling, in proof-of-principle studies using lipases, and we were able to increase the enantioselectivity to a notable degree. If you do something completely new for the first time, it doesn't matter how efficient it is. So we were not concerned about practical ramifications or the efficiency of what we were actually doing. Now the challenge in directed evolution is to develop methods which allow you to probe protein sequence space efficiently. What do I mean by that? Consider, for example, an enzyme composed of 300 amino acids. If you introduce one point mutation randomly, everywhere, at every position, and remember, there are 20 building blocks, 20 different amino acids, you can calculate there are about 6,000 different mutants possible. If you introduce two mutations simultaneously, this jumps up to about 15 million, and with three, 30 billion, which are impossible to screen. And remember: how are you going to screen even a thousand or 5,000 samples for enantiomeric purity? So this was one challenge, to develop high-throughput methods for ee determination. I will not go into any details there. The other intellectual challenge is what I just addressed, namely methodology development. We published our proof-of-principle paper, others joined us, and industrial companies also used this method to create new catalysts for asymmetric catalysis, but we were not really happy with the methods. So let me now show you how to beat the so-called numbers problem in directed evolution, how to make small, high-quality libraries with less effort. That is the challenge. And our answer is shown on this slide here. We call it iterative saturation mutagenesis, ISM. You first make a decision regarding sites in the enzyme where you want to randomize. Saturation mutagenesis is a method that I have not introduced yet, but it is as follows. You can choose, for example, one position anywhere in the enzyme. You define it and introduce randomly all 20 proteinogenic amino acids there, and you get a library of all mutants; there are 20 then. If you saturate, or randomize, as people call it, two positions simultaneously, it's 20 to the power of 2, 400, and so on; three amino acid positions would be 8,000. So you need a decision where to randomize.
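The sequence-space numbers just quoted follow from elementary combinatorics; as a gloss, here is the standard counting, with 19 alternative amino acids per substituted position in an n-residue protein:

```latex
% Variants with exactly k point substitutions in an n-residue protein:
\[
  N_k \;=\; \binom{n}{k}\,19^{k}, \qquad n = 300:
\]
\[
  N_1 = 5\,700, \qquad
  N_2 \approx 1.6\times10^{7}, \qquad
  N_3 \approx 3.1\times10^{10}
\]
% Full saturation (all 20 amino acids) at s chosen positions:
\[
  20^{s} = 20,\; 400,\; 8\,000 \quad \text{for } s = 1, 2, 3
\]
```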
So it's knowledge-driven, mechanism-driven, structure-driven, and we have developed criteria with which you can choose the appropriate positions where to randomize. I will show you in a minute what those criteria are, but let's say you have analyzed your system and there are four sites, A, B, C, D, and just to make it less abstract, let's say A and B are sites composed of two amino acid positions, and C and D of three amino acid positions. And you can see on the slide that in the case of three, as I said, there are 8,000 mutants. It doesn't mean that you harvest the first 8,000 bacterial colonies and you have all of those. There's a statistical argument, according to which you have to do so-called oversampling. So you have to harvest many, many, many more if you insist on really screening all of those 8,000, for example. But if you look at the scheme, it looks a little complicated. We make four libraries, screen them for enantioselectivity, and pick the winner, as you can see, in each case: A, B, C, and D. Then the iterativity comes into play. We have changed the catalyst, the enzyme, structurally. And we take the gene that encodes this mutant enzyme and then visit the other sites, in the case of A: B, C, and D. And then we continue until we have visited all four sites, and then it converges. When we set up this scheme, we did not dream how successful it would be. This is the most efficient way to do directed evolution. Now let's look at the criteria. We have to make the decisive choice of where to randomize. So it is a combination of, let's say, rational design and randomization. And that's shown on the next slide. We call this the combinatorial active-site saturation test, CAST. So it's a nice acronym, CASTing, for substrate scope and enantioselectivity. And you simply look at the binding pocket and see which amino acid residues line this binding pocket. Those are our A, B, C, and Ds. And then we perform this systematically, in the sense of iterativity, according to the scheme that I showed you on the last slide. Now let's briefly look at our first example, and that's shown on the next slide here. This concerns the so-called kinetic resolution of a racemate, in this case an epoxide, and we use an epoxide hydrolase. So it's a racemate, a one-to-one mixture of R and S, and we only want one of the enantiomers to react to the diol; that would leave, after 50% conversion, the other starting material untouched, and those can then be separated. So we performed the CAST analysis based on the X-ray structure, and we came up with six sites, A, B, C, D, E, and F, each composed of two or three amino acid positions. On the right side you can see a cartoon which pictures these six sites. So the student performed the saturation mutagenesis six times, got six libraries, and the best hit came out of library B. And this was then used as a starting point, people call it a template, to visit another site. So the question is, where should you go? If you remember, the dendritic scheme was somewhat complicated. Today we know it really doesn't matter which pathway you take. But on the next slide you see the result. We have the wild type; this is the selectivity factor, the relative rate of the one enantiomer with respect to the other. You see the best came out of B; this has a selectivity factor of 40. Then the student visited C, D, F, and E, and we came up with a mutant, LW202, which has a selectivity factor of 115.
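Two standard formulas sit behind this part of the talk; they are given here as the usual textbook expressions, not necessarily the exact equations used in this work. The oversampling requirement follows from Poisson sampling of a library of V variants, and the selectivity factor E of a kinetic resolution can be computed from the conversion c and the ee of the remaining substrate (the Chen equation):

```latex
% Transformants T needed to cover a library of V variants
% with probability F:
\[
  T \;=\; -\,V \ln(1-F), \qquad F = 0.95 \;\Rightarrow\; T \approx 3V
\]
% Selectivity factor from conversion c and substrate ee (ee_s):
\[
  E \;=\; \frac{k_{\text{fast}}}{k_{\text{slow}}}
    \;=\; \frac{\ln\!\left[(1-c)\,(1-\mathrm{ee}_s)\right]}
               {\ln\!\left[(1-c)\,(1+\mathrm{ee}_s)\right]}
\]
```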
And we only had to look at 20,000 reactions, which happens to be the same number that we had already screened using the old strategies, error-prone PCR, in an older study. But the results there were very, very poor: we could only double it. And here we have multiplied it by a factor of 20 or 25. An interesting question concerns the problem of identifying the reason for the enhanced enantioselectivity. We just recently published a paper concerning this question in this specific case. And it's a long story. You can read it in the Journal of the American Chemical Society, 2009, just a few months ago. But I only want to show you one thing that is very important, namely the X-ray structure of the wild type, the starting enzyme, and the X-ray structure of the evolved, enantioselective one. If you look at the two X-ray structures, they're essentially identical. But if you zoom in on the binding pocket, and let's take a look at that: here on the left you see the wild type. And these are our sites. And here's the binding pocket. It's a kind of narrow tunnel. And if you just take a quick look on the right, you see the best mutant, LW202. And I think you'll admit it's completely different. So the shape of the binding pocket has been changed to such an extent that only one enantiomer fits in and reacts. The other one does not react and does not fit into the binding pocket. I could show you now a movie of this whole thing; I'm going to leave that off. And we've done kinetics and many other experiments. You can find the details there. So now I'm more or less at the end of this video. I hope that you enjoyed this little adventure accompanying me. I hope the basic principles are clear now. The ramifications are far-reaching, and it's not just enantioselectivity and substrate scope; we can also handle thermostability and stability against hostile solvents. Those are the most important parameters for real applications. Those are the traditional, historical limitations of enzymes as catalysts in biotechnology and also in synthetic organic chemistry. I hope you enjoyed this little trip.
Prof. Reetz (MPI Mülheim) talks about the synthesis of enantioselective enzymes by means of directed evolution.
10.5446/18685 (DOI)
Can you just introduce yourself? Oh, yeah. OK. So I'm Ryan Lortie. I've been a member of the GNOME project for a long time, and a short while ago I undertook improving the relationship between GNOME and FreeBSD. So yeah, I guess that's the introduction, in fact. So I noticed that there were two big problems. And I was sort of approaching it from the problem that GNOME had, but there was also a problem in FreeBSD, which was that GNOME on FreeBSD was not happy. There was version 2 in the ports. Version 3 came out, and for many, many years there was still version 2 in there. And it was kind of a bad situation, because GNOME is big and unwieldy, and it tends to touch a lot of things. And every time a new version came out, it would be like, OK, what's broken now? OK, well, everything's broken. So we've got to file a bunch of bugs, get it fixed upstream. Hopefully, it'll be fixed in the next version. OK, great, it's fixed. But now a whole bunch of other stuff is broken. And this was just because if you're doing something only once every six months, or even sometimes longer, it's not fast enough to catch the problems as they're happening. On the other side, I maintain GLib, which is sort of like the base library of GNOME. And it's kind of a mess when it comes to portability. We have stacks of #ifdef code around certain things like stat and statvfs and statfs, and just basic stuff like: how do I figure out how many free blocks there are in this file system? There are about eight different ways of doing this, depending on the operating system you're on, like old versions of Solaris and stuff. I don't even know how much of that code we use. This is a problem for me, because I kind of want to rip it out. And I'm kind of wondering, well, if I rip it out, is somebody going to get angry at me? Am I going to get hate mail? I don't know. So wouldn't it be nice if we had an actual set of supported systems that we knew we supported, and we had a way of testing those systems? So we sort of decided in GLib, and if GLib does something and says this is the way it is, it's pretty much affecting all of GNOME, because you need GLib to run GNOME. We said: we're going to move away from this idea that we theoretically support anyone who wants to come to us, and we're going to try and nail down more concretely an idea that we support these platforms because we care about them, and we know we support them because we actually tested it. It's not just theoretical, "hey, somebody sent me a patch a long time ago, maybe it's still working." It's something that we're testing regularly. So we have a list of platforms that we actually target, and we actually do test regularly on most of them. And we've gotten a little bit better about saying no to people who just show up with one-time patches for obscure operating systems, unless they can commit to staying around and keeping that updated. Yeah? How long is the list? Can you tell us what's on the list of systems? Yeah. That's what it looks like. These are sort of our first-class candidates here. They're the ones that are really getting done on basically a daily basis. These are the ones that we support, but it's not as good, and we would prefer if people could do more stuff about that. Yeah, so that's it. OK, thank you. And we also have toolchain requirements, where we say your compiler must support these features as well, even if they're not necessarily mandated by POSIX.
And that page I just flashed up really goes into details about why we chose this approach. And one of the things that that page mentions is that you can't build something like GNOME on top of POSIX. You just can't do it. I mean, POSIX is great. I love it. It's very well written. It tends not to be too ambiguous; you get a very good idea of what you're allowed to do by POSIX. But it's simply not enough. Really simple things like power management: completely silent on this topic. How do I change the system time zone? I don't know. Left to the vendor to decide. POSIX is really minimal. We need more than that. And as I was saying before, you can talk about "OK, we're adhering to POSIX, so it should work on all systems," but in practice, that doesn't work either. Because there's always something where, even if you think you followed the spec, you actually didn't, and you're using some specific behavior that is only on Linux or something. And you don't know that when you run it on other systems, even if they are POSIX compliant, it's going to break horribly. And unless you're actually testing on those systems, you're never going to find out about that. And that brings me to an important point, which is that nobody in GNOME hates FreeBSD or any of the BSDs. There's this idea going around, but it's really not true. I mean, some people are BSD-friendly. Some people couldn't give a damn, I think, is probably a good way of describing it: just like, OK, whatever, you're over there. But nobody is actively hostile. And even the people who don't really care, if you show up with patches in hand, they're generally pretty good about applying them. It really is just a matter of: we make mistakes. What can we do? We are all running Linux in GNOME, for the most part. When we write code, occasionally we depend on a feature and we don't realize it's not in POSIX. Honest mistake. And it's really good, and we appreciate it, if people are calling us out on this stuff. But there is another kind of issue, which is the stuff I was saying about things that are not in POSIX. In GNOME, we have this approach we call "draining the swamp." And basically, it's: OK, everything's crap. We need to build some feature. We can't do it with what we have here. So we need to plumb through the whole platform layer in order to get what we want. And that often means that we're writing new stuff, things like systemd, in fact, coming originally from someone quite involved in the GNOME project. And these things are doing useful stuff for us, like what I mentioned before about an API for changing the time zone or the date and time, or changing the host name, or stuff like this. Before, in GNOME, these were parts of GNOME, and they were done in system-dependent ways in this weird back end with lots of #ifdefs that nobody ever looked at. And it was running as root. And we're happy not to have that in our code if we can depend on a feature that's provided by the operating system. So why it's interesting for me to focus on the BSDs a little bit more is because, from the standpoint of GLib, sure, we have to support Windows and Mac and all that. But GNOME itself doesn't target Windows and Mac. So if not for operating systems like BSD, GNOME as a desktop is basically a Linux-only affair. So having another platform that we can target is quite useful for us in terms of going from "it's theoretically portable" to "it's actually portable."
And even just having one, like FreeBSD, for example, you find really a lot of the portability issues, just because there's a different compiler. I mean, Clang and GCC in some ways are the same compiler, because they support all the same features. But you do find a lot more bugs just by having one other C compiler. And then after that one extra, if you get maybe three C compilers, you get diminishing returns quite quickly. Same with libc and the kernel. Anything else, just having that one extra is going to find really a lot of portability issues, because any Linux-specific feature, you instantly find out about it. In addition to portability issues, do you find that you improve your code overall by having it work on more than one compiler? It's an interesting question. Sometimes that's true. It's actually often false. To give an example of something that we recently added support for in GLib that depends on a GCC feature, and that therefore we cannot use in GLib itself: there's a thing called the cleanup attribute in GCC, which is phenomenally useful (there's a small sketch of it below). It allows you to define an attribute on a local variable such that when it goes out of scope, it automatically gets a function called on it that will free it, or whatever. This could phenomenally improve our code quality. And we can't use it, because we have to be portable, right? We often run into things like this: if only we had this compiler feature everywhere, we could do this thing that would make our code really a lot more readable, a lot better quality. Yeah. Sometimes you're forced to go to the lowest common denominator. Yeah, and that does hurt code quality often. So are you requiring GCC, or is it needed? And Clang is a supported compiler, because Clang has the cleanup stuff? Yeah. And in lots of GNOME projects, they use the cleanup stuff. Oh my goodness. Yeah, sorry, I'm giving a talk. Lots of GNOME stuff that only targets Linux and FreeBSD will use that stuff. But in GLib, we also have to target macOS and Windows. Well, macOS, again, wouldn't be a problem. But we target Visual Studio, so it's a no-go. And in theory, if you want to use the Intel compiler or the Sun compiler, that stuff is still supported in GLib. So yeah, we can't rely on that feature, which is unfortunate, and I'd like to. But yeah, often if you try and support multiple systems, it does force you to think in a more abstract way, and sometimes that leads to improvements. But I'd actually argue that, honestly, just being able to do it once and do it a certain way is almost always better for the code being cleaner, actually. Fewer abstractions, I think, can actually be a good thing, just because it's less code overall, right? So yeah, a little more detail about what I was talking about earlier, sort of going back over to the FreeBSD side of things now. There was GNOME 2 in ports forever after GNOME 3 was released. The uphill battle I was talking about with the new versions: it was hard to stay up to date with that. So a while ago, I approached the FreeBSD GNOME team and said, hey, there's this thing, JHBuild. I think you guys should run it. And what JHBuild is, for those who don't know, is sort of a meta build system. GNOME is really very large. If you talk about it and its closely related external dependencies, you're talking like 160 tarballs, more or less. And building all of those is a bit of a pain. So we have JHBuild, which does that for us.
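Before going on with JHBuild, here is the small sketch of the cleanup attribute promised above: a minimal C illustration of `__attribute__((cleanup))` as supported by GCC and Clang. This is my own example, not GLib code; GLib exposes the same mechanism to applications as the g_autofree / g_autoptr macros.

```c
#include <stdio.h>
#include <stdlib.h>

/* The cleanup function receives a pointer TO the annotated variable,
 * hence char ** for a char * variable. free(NULL) is harmless. */
static void free_charp(char **p)
{
    free(*p);
}

int main(void)
{
    /* However this scope is left (normal return, early return),
     * the compiler arranges for free_charp(&buf) to be called. */
    __attribute__((cleanup(free_charp))) char *buf = malloc(64);

    if (buf == NULL)
        return 1;               /* cleanup still runs; free(NULL) is OK */

    snprintf(buf, 64, "no leak even without an explicit free()");
    puts(buf);
    return 0;                   /* buf is freed automatically here */
}
```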
And it'll go and download, and then it'll configure it, make, make install. And it does something cool: you can install it in your home directory. And then it'll set up a bunch of environment variables, like LD_LIBRARY_PATH and all that stuff, so that when you build the next module, which depends on that first one, it can find the include files and the libraries and all that in your home directory. So you can do this without messing up your system. And one of the... yeah, sure. I was just going to ask what JH stands for. James Henstridge. He's the guy that wrote it. OK. Little bit of history there. He's a cool guy. One of the really cool things about JHBuild, though, is that its default mode of operation is that it takes all of the software out of Git, and it takes it out of Git master. So if you want to know what's in GNOME today, as of the thing that got committed five minutes ago, you can compile it with JHBuild and you're going to get that. The reason that's really cool is you can really keep track of any issues that might be sneaking into the code at any given time. And certainly, if we're doing that on FreeBSD, then we get a really good idea of any potential portability issues that are sneaking into the code at any given time, which is wonderful. And this is where that whole "a stitch in time saves nine" thing comes in. When you find somebody did a commit that caused a problem, or, say we're doing it once a day: it worked yesterday, it doesn't work today, so maybe there's like 10 commits, and this looks like the one that caused the problem. I can email that person. It's fresh in their mind. I can file a bug about it. They're probably going to have some idea of what they can do to fix it at that point. Versus if I'm waiting six months or a year later, when the release comes, and it's not working: oh, geez, OK, what changed? I don't even know how to track this down. Whose fault is it? Even if I do figure out what commit it is, I go to that person: OK, do you remember this thing you did a year ago? Yeah, it's causing me trouble now. Can you reevaluate that? Well, no, sorry. I'm totally on something else now. I can't do that for you. So the FreeBSD GNOME team, and Koop has been a big part of this, is basically running JHBuild on FreeBSD every day now, at least once a day. And Ting-Wei Lan is another person who's doing this very actively. And when stuff breaks, they're filing bugs upstream right away. And these aren't just portability bugs. I mean, we have continuous builders as well, running on Linux, in GNOME. But sometimes the BSD guys are the first ones who find problems where it's just like: did you actually test this before you committed it? And these bugs are going upstream too. And this has really resulted in a fundamental shift in the relationship, I think, between FreeBSD and GNOME, because GNOME contributors and GNOME hackers and maintainers have really become accustomed to receiving bug reports from FreeBSD people now. And they're pretty good, in my opinion, about replying to them. We have a wiki page, actually. I should pull that up. It sort of details everything that's been going on, and it has a list of fixed bugs. Yeah, so this page is huge. I mean, you see the scroll bar there. It's really impressive. Like, this is: if you want to get it set up, you've got to do all this stuff. Yeah, and these are the outstanding issues in GNOME, which is some low-priority stuff.
But the most impressive thing is this: the issues that we've solved. And that goes on. It's just a huge number of patches that were sent upstream and applied by upstream, happily, for addressing FreeBSD portability issues. And that list is growing all the time. One patch was rejected upstream, which I think is pretty cool. That was a patch against udev. And I talked to the maintainer about that recently, and he might change his mind. So the response from upstream has been great about that. Yeah, so that's really good. As I said, you all saw the list. It's pretty good, and it's just getting bigger. So what do we do? One of the things I was mentioning is how POSIX is not good enough for us. So we're doing things like logind for power management and stuff, and figuring out which session is the active one. And we're depending on things like udev for enumerating hardware and all that. And this is my opinion, and it's something that I've talked about with the release team of GNOME, and they're sort of on board with this idea: the approach to portability that I like is that you depend on an API, not a particular piece of software by name. And when I say an API, I mean a header file that you include that has a certain set of symbols of a given name in it, or a .pc file, a pkg-config file, that you link into your project and, again, gets you a certain set of symbols that you can use in the project. So, say I wanted to enumerate all the webcams on the system: it would be good if I had just a single .pc file that I could include that got me an API that looked a certain way that I could use. And I could do that everywhere. That would be really great for me from the upstream standpoint. As it turns out, we have this. It's udev on Linux right now. It's not the most portable API. There's nothing stopping people from implementing it, per se, but I can understand why people wouldn't want to, because it's very Linux-specific. It's putting stuff from the kernel in there. And other things like the logind APIs, or the other systemd service APIs: I think they're really good APIs, and I wish people would just implement them. And it would never be our intent to depend on systemd, for example, but to depend on anyone who's willing to provide these APIs. And in my opinion, that is sort of the best approach to portability. Because for lack of that, I'm basically having to write different back ends and put #ifdefs in my code and stuff like that. So we do need to depend on more than just POSIX. But in a way, POSIX kind of gives us a good idea: why don't we write a spec for what we expect from the operating system? And then everybody can implement that spec. And one of the cool things about a lot of systemd is it has a wiki page where it says: here are the various APIs that we implement in systemd, and here are the ones that we think should be reasonably portable to other systems; and by the way, here's where they are documented, and this documentation is firm, and we don't plan on changing the API. And on that topic: I think systemd... I mean, if you look at it, we sort of did HAL and DeviceKit, and we had DevFS. We're getting to a place... and I mean, before, as I said, we sort of started out at this point of: GNOME needs to do stuff. How do we get this stuff done? It all sucks. What can we really do about it? So we started doing things. And it was kind of at a point where D-Bus was new.
We were kind of getting a grasp of how we write good D-Bus services. HAL did what we needed it to do for a while. But it was clearly sort of an experimental foray into this new world, and it wasn't great. And similarly, ConsoleKit: it did what it needed to do, but it wasn't great. So we're now at a point where we have, and really a lot of this is being done under the banner of systemd, for the first time something that people actually feel good about. And that is good. And I think that it's going to be around for a while. So when BSD people ask me, "well, we did the HAL thing, and then you guys ripped the carpet out from under us; are you going to do the same thing with systemd?" I don't think so. I think systemd is here for a while. So, basic stuff about what I think anybody who's interested in porting any kind of large software project to BSD should do: get upstream. If you're maintaining patches in the ports tree, you're probably doing something wrong. But not always, because: get upstream first. This is something... I've been involved in Ubuntu for a long time, and Ubuntu is definitely a project that has an interesting history with the GNOME project as well. It hasn't always been friendly. But we sort of developed a certain protocol of what is considered good behavior for interacting with upstream projects. And "upstream first" is basically the number one principle. If you've got a problem and you need to fix it with a patch, fine. But your first step should be getting that patch upstream. Where the "first" comes in is that, OK, so you sent it upstream, and now they're ignoring you because the maintainer is busy or whatever. By all means, put that in the ports tree as a patch against the package. But when you do that, make sure at the top of it, as a description, as part of it, you provide a link to the bug. Explain why it's needed. And maybe even give them a week to see if they can get it in. Maybe they change it a little to make it a little bit better in some way that only they would know how to do, because it's their project. You get a better patch by doing that. So I'm kind of getting to the end of the main thesis of this thing, so I'm just going to throw out a bunch of points here. I kind of thought the audience would be a bit bigger, I have to admit. My wish-list items for things that could be better in BSD: getting this stuff implemented would be really nice. Yeah, a 64-bit kqueue interface would be good, mostly because it would let us set kqueue events for absolute monotonic time with microsecond or nanosecond accuracy. Because if you only have 32-bit counters and you want microsecond or nanosecond accuracy against the monotonic time, that stops working after some number of days of uptime. So you basically need the 64 bits for that. Yeah, this sucks. Please, some kind of file notification API would be good. It does not suck. It's not done for what you want. So we need something new along with it. It sucks that I have to use this. Yeah, that's it. Yeah. But on that topic, if we could make this a little bit better, I would ask for the implementation of this, which is something macOS has, and it makes EVFILT_VNODE slightly less horrible. Basically, you can open a file, and just like O_RDONLY or O_RDWR, you say O_EVTONLY. And you can't do anything with the file except stick it in a kqueue. And then you get notifications about changes.
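For readers who haven't seen it, here is a minimal sketch of the kqueue file-watching being discussed: plain C against the documented EVFILT_VNODE interface. Note that O_EVTONLY is the macOS-only flag being wished for; on FreeBSD the open below has to take a real descriptor with O_RDONLY, which is exactly what pins the filesystem. (On the 32-bit timer point: a 32-bit count of milliseconds wraps after 2^31 ms, roughly 24.8 days, hence the wish for 64-bit variants.)

```c
/* Watch one file for writes/deletes/renames via kqueue (FreeBSD/macOS). */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* macOS could use O_EVTONLY here to avoid pinning the mount;
     * on FreeBSD a real descriptor is required. */
    int fd = open("/tmp/watched.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int kq = kqueue();
    if (kq < 0) { perror("kqueue"); return 1; }

    struct kevent change, event;
    EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_ENABLE | EV_CLEAR,
           NOTE_WRITE | NOTE_DELETE | NOTE_RENAME, 0, NULL);

    /* Blocks until the file is written to, deleted, or renamed. */
    if (kevent(kq, &change, 1, &event, 1, NULL) > 0)
        printf("vnode event: fflags=0x%x\n", (unsigned) event.fflags);

    close(kq);
    close(fd);
    return 0;
}
```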
And the biggest difference with O_EVTONLY is that you can actually unmount the file system that you did this on. So we can watch for files on removable media. Yeah, because right now we have to poll, basically. So that would be good. Get that on the recording; if he said that, it would be easy to do. Yeah. So Baptiste says that this will be very easy to implement, and he plans on doing it next week. No. No? Tonight? Oh, even better. Just some miscellaneous stuff; some of it's on that web page. Libtool is still setting rpath in inappropriate ways on BSD. It would be cool if it could stop doing that. So you've stopped installing .la files for the most part. You're not installing .la files most of the time, or if you install them, they are empty? But rpath is still getting set, which is annoying. I think we changed that. OK. Just to check. But in the appropriate way. Yeah. Well, if you look in your JHBuild install directory, just look at the libraries and see if they have an rpath. And if they don't, then it's good. But if they do... and I think that they do. This is another huge one. As I said, what we consider an API is often the pkg-config file. And there are a lot of things in the base system in BSD that implement APIs that we would expect to have pkg-config files, and they don't. So we have to hack configure arguments in order to get them registered. Yeah, but usually, because I think we are managing all the missing .pc files now. So basically everything, which is things like zlib and stuff. OK, that's good. Is it? Sorry? Except OpenSSL; we don't have that one yet. And since you're working on... what's the one you're working on tonight? You're working on the O_EVTONLY tonight. So tomorrow night, you have to work on this one. That'd be really cool. If I could ask: which package do I have to install in order to get this file on my system? This one, we have it in the tools for ports, and it's not implemented for everything yet, because the database would be too huge. So the idea is to have a separate database so that you can have an accurate map of which package provides a file. Actually, pkg repo does. So it's really able to create that; it's just a matter of instrumenting it, and all the stuff under the hood is already available somewhere? Yeah. So this would help a lot, because JHBuild has a feature called sysdeps. And as I said, GNOME has a ton of dependencies. And maybe you saw at the start of that wiki page, it said: type "pkg install" this. And JHBuild is actually able to figure that out itself if it has some way of knowing which package provides this file. So that'd be cool. And here's one I just threw in; that's pretty funny. You have this libgeom in FreeBSD. And most functions in it are geom_-something. Some are gctl_, and some are g_. And there's another popular library that also uses g_. And yeah, we both have this symbol. So no GNOME program can link against libgeom, which is a problem for libgtop. So in some ways, I kind of wanted to start a conversation. I'm pretty much done with the slides now. If anybody wants to talk about experiences they've had, or like, "I approached a GNOME developer once and this happened; can you help me out with this?", that'd be pretty cool.
I mean, there's a tradition of rivalry between projects, but over the long run it's usually a sibling rivalry, where they're really on the same side in the long run. Yeah, like GNOME and KDE, or whatever. Yeah. Yeah. But I like the idea that once they found there was valuable information in the form of PRs, or bug reports in that case, from another project, it probably didn't matter to them what other project it was, as long as the information was valuable. And it's like, hey, it turns out they're our friends. Yeah, and my approach to that has always been, well, at least my new approach since a couple of years ago, when I decided the let's-make-everybody-happy thing just isn't working anymore: if somebody's willing to show up and do it regularly, and certainly if it's going to improve my code quality, by all means. And if I've done something that's Linux specific, and there's a better POSIX equivalent, and there's nothing worse about it, then by all means I'm happy to use that one. And I think almost anybody would do this. We get things like, oh, you're assuming a bash feature in /bin/sh, you can't do that. Everybody's happy to fix those kinds of bugs. Right, but I think there are a lot of people who think that, oh, they're another project, they're not going to want our input anyway. Yeah, absolutely. Right, but I think that is a common misperception that should be corrected. Yeah. I mean, Lennart said openly that he didn't care about anything from the BSDs. So, Lennart, yeah. That's not entirely true. He said that he didn't care... he doesn't want to implement compatibility in systemd for that, but it makes sense, because it's mainly Linux-specific stuff. I mean, our init doesn't have any compatibility with something else either. The problem after that is the stack of stuff on top of systemd; all of those parts could be portable. But the reason why they don't want to take any patches from the BSD folks makes sense, in my opinion, because we wouldn't want Linux patches in our init either. To tell you the truth, having talked to Lennart a lot on this topic, he is actually actively anti-BSD. He is one of the few people I know who I would actually say that about. He wishes that all of the BSDs would just die so that everybody focuses on Linux instead. But he is the only one I know, like the singular person I know, that has this extreme attitude on the topic. So some of you might think he would almost like this; maybe you could invite him to BSDCan. No, I don't think he would come. I don't think he would come. I think we have this GNOME OS idea, which is: we want to build this whole operating system, starting from Linux and all this stuff, all the way up. And GNOME OS, to different people, means a lot of different things. And one thing it was, was this continuous integration: we would build this image for virtual machines that we could test, and do it every day. Actually, we do it on every commit. And this is a part of what that was. But what it mostly was, in my opinion, is just a list of things that you have to have in your system in order for it to be considered a GNOME OS: this has to work properly, this has to work properly, we need this. And I would say that FreeBSD doesn't meet those criteria, but only because you don't have things like NetworkManager. And if you got all of these things, and it was working at the same level, I wouldn't necessarily say that this couldn't be GNOME OS, in the same sense, even.
One of the things I would really like to see the GNOME project do is, when they rely on something which is not directly a GNOME project, like logind or NetworkManager or whatever, that they actually list the actual API from it that they rely on, so that if we are to create some kind of wrapper on top of our own libraries and provide the same API, we have at least the first-level subset of the API that we need to implement. And probably later, some people will be improving it. But having this list... I mean, following it, for example... I think you don't rely on that many APIs. Yeah, so this is a conversation I've had with the systemd guys a couple of times. And in fact, that one rejected patch that was on my wiki page is basically this. And this goes back to what I was saying that, for me, an API is a pkg-config file. And saying that, OK, you implement this API and you provide this pkg-config file, but it only implements a subset of the APIs: to me, that's never going to cut it. I don't like that. What I would rather, and that's what this bug was about, is if there's something like logind that has a whole bunch of things in one API, a lot of them not related to one another, well, then really maybe that's three APIs. And we're talking D-Bus there, so it doesn't make as much sense. But in this case, it was udev. And you had all the udev APIs, and then separately you had this hwdb thing, where you could look up PCI IDs and stuff like that. And this was all grouped together under a single API, one pkg-config file. And we wanted to just use the hardware db lookup, which in no way depended on anything in the kernel; it was basically a hash table lookup in a database file. There's no reason that couldn't have been separate. And having that as a separate pkg-config file, and therefore a separate API, in my opinion, is something I wanted. Yeah, but what I mean by that is, we have started, for Xorg, and we're going to extend it for a lot of things, a project called libdevq, which enumerates devices, basically. So the idea is to provide an equivalent high-level API to what you can get through udev or udev-like stuff. And if you want us to prioritize what kind of query to do first, then if you have a list of what you expect from udev, extracted from the wiki page or whatever, then we know that probably the priority is hotplug of this kind of hardware, or this kind of hardware event, for this kind of stuff, so that we can have something which is good enough for GNOME to be able to build on. But is your intent to provide a udev API, or to provide an equivalent API? Well, the goal of libdevq is to provide its own API, with backends for basically anything. So the goal is to be able to unify all the non-Linux operating systems that right now don't have a udev API; they can just plug into our version of the same kind of primitives. Yeah, so in my opinion, this is unacceptable, in fact. I don't want a situation where normal applications are expected to deal with one API or another. It's really a backend that you have under our library. Something that I think would be far more likely, and that people would find more acceptable, is that we put something in GIO, for example, which wraps both of them and provides an even higher-level and nicely GObject-ified API, and we deal with the abstraction in GLib. Because I really don't consider it in any way appropriate that normal applications should have to do this.
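A purely hypothetical sketch of the kind of backend abstraction being discussed; every name here (dev_event, dev_monitor_ops) is invented for illustration, and this is not a real GIO or libdevq interface:

```c
/* One vtable, multiple per-OS backends: applications would only ever
 * see the high-level (ideally GObject) wrapper built on top of this. */
struct dev_event {
	const char *subsystem;	/* e.g. "input", "net" */
	const char *devnode;	/* e.g. "/dev/..." */
	int         attached;	/* 1 = device arrived, 0 = device left */
};

struct dev_monitor_ops {
	void *(*open)(void);					/* backend setup */
	int   (*next_event)(void *ctx, struct dev_event *ev);	/* blocking */
	void  (*close)(void *ctx);
};

/* A Linux backend would implement these on top of libudev; a BSD
 * backend on top of libdevq, or whatever API is forthcoming.  Normal
 * applications would never see either library directly. */
```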
Yeah, that's true for me. But anyway, I need to know what GIO in that case would expect from the operating system, so that when I'm on my library side, I know that my priority is on this, this, this. This is cool, but no one needs this stuff for portability yet. Yeah, and that's something that I wanted to work on; it's on the wiki page, something we should get done. But it's not something that I personally have a lot of time to work on right now. I got pulled into a lot of stuff, like the application confinement work, which is pretty interesting too. But if somebody wanted to undertake a project of making a high-level wrapper in GIO that would wrap udev, and also equally well wrap whatever BSD API is going to be forthcoming, that would be a project I'd be very much interested in. And that's the kind of thing that GNOME people would even like, because udev, it gets the job done, but it's not a great API, right? So if they could have a nicer API that's easier for them to use and be more portable at the same time, that's just a win for everybody. Yeah, because we just cannot recreate the udev API, because it exposes some internals. Yeah, insert weird string here. There's all the sysfs stuff which is exposed, and we can't expose the same thing. So you will need it. If you want to do this on the BSDs, you will need an abstraction in GIO for that anyway. Yeah, as far as I see it, that would be pretty much the only acceptable way forward for GNOME. Having applications ifdef or whatever would not be good. For us, it's even better, because even things which rely on GIO but not GNOME, we can just talk to them and say, use your abstraction. We know that we just have to focus on GIO to get the portability. Yeah, so if anybody wanted to work on this kind of thing, I'd really be happy to mentor the project and say, OK, this is what I think the API should look like, and then go and do it. I'd be happy about that. Could it be a Summer of Code project for next year? Yeah, it could be. And again, for logind, there are a lot of APIs in logind that we need to use. But again, it's only a subset of some of them. And exposing that subset in a nice way through GObject could also be something that's appealing. Although logind has a pretty decent API already. But there are some things, like it makes you deal directly with file descriptors for the suspend stuff, and probably most people would rather deal with a GObject instead, for example. Cool. Thank you for coming. Oh, sorry. Yeah. Do you have any thoughts on the different file system event notification APIs that exist? Yeah, Apple does it best, I think. Even inotify is pretty crappy; I hate inotify. But even inotify is like a world better than what is available in FreeBSD. Yeah, so we're looking at a funded project to handle something like inotify. Yeah, I hear there is an idea that, instead of queuing up an FD inside of kqueue, you could tell it about an inode instead, right? Well, there's a whole bunch of different ideas, but I'm just curious; we're trying to figure out what the best method for it would be. And an option on the table is, we'll make an FSEvents-compatible database. Yeah, FSEvents is good for most high-level things, I'd say. But it doesn't work the way that we have our API working in GLib, in fact. It's more like "what changed since", where GLib is more about being online all the time.
And it's not interested in something that happened when the system was down, because the API just doesn't expose this concept. I'd say, if I were to tell you the things that I don't like about inotify, for example, one of them would be that it's extremely difficult to know which file I'm actually monitoring. Because, if you imagine... I think about race conditions a lot. And if I imagine I have somebody basically actively attacking my algorithm to monitor files, I could imagine I get into a situation where I have a bunch of files being renamed. I ask inotify to watch a file named A, which then gets renamed out from under me. And then I stat the file. And even if I stat the file on both sides of telling inotify "watch that file", I could think that I'm monitoring an inode that's different from the one that inotify actually caught. And that disturbs me a lot. And then it's moved, and now I don't know... OK, so it's moved, so the thing that I think I'm watching is not actually the thing that I'm watching. So you get into situations where you have to watch the parent directory to make sure that no moves occurred in it, and then you have to watch the parent directory's parent directory. This is a bit of a mess. I mean, in a certain sense, the kqueue thing's a little bit better that way, because at least I have the FD in hand, and then I can put that in the kqueue, and I can fstat the FD. So at least I know that it's the same thing. So that's a little bit nice, I guess. But without O_EVTONLY, having that FD open is also a risk. Because even if I only have it open for a millisecond, if somebody tries to unmount that file system at that exact point, it's not going to wait for me to close it; it's just going to tell them, no, I can't. And that's a problem, too. Yeah. OK. Yeah. Yeah, file system notification is something I've thought a bit about. Maybe we could talk about that after. Sure. OK, thank you. Thank you all for coming.
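To make the rename race from that closing exchange concrete, a minimal Linux-side sketch (inotify_init1 and inotify_add_watch are the real inotify calls; the race itself lives between the two stat() calls):

```c
#include <sys/inotify.h>
#include <sys/stat.h>

void
racy_watch(const char *path)
{
	struct stat before, after;
	int ifd = inotify_init1(IN_CLOEXEC);

	stat(path, &before);		/* inode A, we hope */
	int wd = inotify_add_watch(ifd, path, IN_MODIFY | IN_MOVE_SELF);
	stat(path, &after);		/* still inode A? */

	/* Even if before.st_ino == after.st_ino, a rename-swap-rename
	 * between the stat() calls can leave wd attached to a different
	 * inode than either stat() saw, since the watch is added by
	 * path.  With kqueue you hold an fd and can fstat() it, which
	 * closes this particular gap, at the cost of pinning the mount
	 * unless something like O_EVTONLY exists. */
	(void)wd;
}
```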
BSD porters have always struggled with portability of software written by Linux users and never tested elsewhere. GNOME has been particularly difficult. New releases would come with new headaches, every six months. By the time the issues were addressed and fixed upstream, a new release would be out with new issues. In 2014, the FreeBSD GNOME Project changed their approach. jhbuild is now building the full GNOME stack on FreeBSD systems, at least twice daily, directly out of upstream git master. When portability issues creep in, they are addressed immediately, often with patches going upstream the same day. When it comes time to build ports from release tarballs, there are no surprises. A direct result of this effort has been two on-time releases of GNOME (3.12 and 3.14) in FreeBSD and GNOME 3 finally landing in the official ports collection. This talk will discuss what was done and how it changed the relationship of the FreeBSD and GNOME projects, as well as discuss important issues going forward.
10.5446/18683 (DOI)
[Unintelligible opening; most of this portion of the recording could not be transcribed.] ... FreeBSD has switched to clang as of FreeBSD 10 ... [unintelligible] ... released 4.0 ... you use a new and improved method, and off we go. After that I started looking into the source code. Projects like Coverity and also Clang offer static code analysis; running that against the code base raises lots of issues. [Unintelligible passage, apparently about portability across the BSDs and Mac OS, and about using autotools feature checks rather than hard-coded platform assumptions.] So, I lost quite a bit of time due to this, but thanks to Darren Chandler from the OpenBSD project, who kind of prodded me in the right direction towards doing things within autoconf; literally, I should have this up on the slides, but it was a three-line check for doing the test, together with... [unintelligible] ... Mac OS, specifically Mac OS Tiger on PowerPC.
[Unintelligible] ... a Mac Mini to dedicate to the task and free this poor laptop from the torture of building GCC, which would take 48 hours if you want Java support and things like this. At the same time, even though this is a fairly small box, it actually still puts out quite a bit of heat trying to get things compiling. [Unintelligible passage.] Of the roughly 15,000 packages in the pkgsrc tree, about 8,500 packages were building on OS X Tiger. [Long unintelligible passage.] [Partly unintelligible passage about the linker: the version of the linker on Mac OS determines which parameters can be used, and that has to be detected at build time, whereas the modern linker behaves differently; so it depends on the version.] So: Ruby.
[Unintelligible] ... Ruby was unable to cope with Berkeley DB on macOS, because on macOS the files are split into two separate files, whereas Ruby expects it to be one. That was it. There was one comment in the source code, buried somewhere in the database module, and I lost a lot of time to that, and it's very annoying. By the new year I'd managed to actually exceed the number of packages available in pkgsrc from the Intel binaries. After that I started looking around at new platforms to play around with for pkgsrc. Every geek has a soft spot for BeOS, and Haiku is still going, so I thought I'd play around with that. With Haiku, that was really painful, and I didn't actually get very far, because... I'm not sure if this is applicable to BeOS, but with Haiku there is no notion of a multi-user system. It's a single-user system, and you're automatically the superuser. As for their file system, there is no file system as such: you have packages which contain snippets of a file system, and there's a daemon that starts up when the system boots. This daemon takes your packages and union-mounts all the packages that you have, to form what appears to be a userland. Then you have a piece of writable space on disk, which is your home directory. For pkgsrc to work, you would basically be bootstrapping in your home directory. It was a lot of work to actually try to integrate it into the system. The other problem was that Perl, which gets pulled into a build quite early on in the process, would not build; it would build, but it wouldn't link. The Haiku guys have their own package managing system, but the way that they've actually implemented all their changes is, rather than trying to integrate with what's actually there, they've just gone in and started deleting stuff and replacing it. When you're looking at their changes, you have these quite extensive diffs that you need to unpick, whereas for us in pkgsrc, we don't touch the Perl code base. At build time we pass settings to say, build Perl with these settings, and the build goes off and does its thing, rather than actually modifying the source code to hard-code the settings we want to build with. I gave up on this and thought about another platform to apply my skills to.
A year of tinkering with pkgsrc and others. As a pkgsrc developer, ensuring a tree of previously added software builds correctly across various systems / architectures, and as a "developer?", taking an existing project & applying the methodologies learnt from the *BSD project developers to improve the code base. Covering two angles of one problem (software), embarked on by someone who is new to it. Almost a year ago I began to revive Darwin/PowerPC support in pkgsrc to allow up-to-date packages to be built on PowerPC-based Macs. At the start it was possible to build fewer than 8500 packages from the tree on OS X Tiger/PowerPC; sevan.mit.edu is about to exceed 11,427 published 32-bit packages for Darwin/x86 (figures taken from the 2014Q3 bulk build by Joyent). This talk will cover some of the issues which needed to be tackled & what's yet to come over the next few months, in the attempt to build as many as possible of the 15000 packages available from pkgsrc on this architecture, along with expanding the effort to building on 10 different operating systems across 5 architectures. For the programming angle, I discuss my work to clean up the coova-chilli code base to use the facilities the operating system provides, introduce functionality from OpenBSD (e.g. strlcpy), and test building across the BSDs to improve the codebase.
10.5446/18681 (DOI)
I got put in the last slot because by this time, at least, my brain is kind of full, and having a sort of hardcore technical talk just doesn't seem quite right in this time slot. But that's what you've got. So enjoy, I guess. This is not a talk about how you set up or use ZFS. There's tons and tons and tons of talks and blogs and everything else about that. What I actually wanted to know was, like, how does it actually work, under the hood? And it's a little daunting when you first want to dive into the code, because there's about a quarter million lines of code making up ZFS. And even for me, that's kind of daunting to try and dive into. Luckily, I had access to Matt Ahrens, and all I had to do was take him out to lunch, and he would just draw it all on the back of a napkin, and then I just wrote it up and it became Chapter 10 of the book. So, in fact, these slides are drawn straight out of that chapter. So if you want to just cut to the chase, you can just go read Chapter 10, and you can probably do it faster than standing here and listening to me drone on for an hour about it. At any rate, what I want to do is to try and just give you an overview. I don't really have time, believe it or not, to do all of ZFS, so I'm just going to try and hit some of the highlights. Many of you are on the BSD conference circuit, so you've probably heard this at one of the earlier conferences that I gave it at. But even for you, I've mixed it up; I threw out some of the slides that were there before and added some new ones this time. So you can play the game of guess-what-are-the-new-slides in Kirk's presentation. All right. Oh, come on. My battery died. How can that be? All right. So let's start with an overview. ZFS is in the class of file systems that we call the non-overwriting file systems, or copy-on-write file systems if you prefer. The idea is, once a block gets written on the disk, we never overwrite it again. If the contents of that thing need to change, then we are going to make a new copy of it. So in a traditional UFS-style overwriting file system, if you change the mode of a file, we read the inode in, we change the little mode bits in there, and then we write it back on top of the same place on the disk. Whereas in ZFS, we bring it in, we make the change, and now it's going to be written into a new block. And eventually, when we take a checkpoint, that will become part of the state of the file system. And so there's the old copy of the inode and there's the new copy of the inode. And absent any snapshots, that old copy can simply be freed up, the block of disk that it's in. If there's a snapshot that still references it, of course, then we can't free it up, because the snapshot is still using it. So a lot of the real trickiness of ZFS is keeping track of when it's time to free blocks, and I'll talk about that towards the end of this talk. Another aspect of ZFS, and non-overwriting file systems in general, is the fact that the file system is always consistent. With UFS, there's this period of time where some stuff's been updated and other stuff hasn't. We stage things so that we can always recover the file system, but you still need to run a log or run fsck or whatever it is to get it back to a completely consistent state.
Whereas with the non-overwriting style of file system, changes happen in memory, and then at some point we decide to take a checkpoint; we write all the new stuff out somewhere, and then the very last step is we update the uberblock, sort of the superblock of the whole ZFS pool. And it's just that write, where we finally write the very root of the tree, that takes us from the previous to the new position. But that new position is consistent. So either we haven't written it yet, in which case we have the old consistent file system, or we have written it and we now have the new consistent file system. Now, obviously, things can happen in between there, and so, as you'll see, we have to carry along a log to make sure that we can update that consistent snapshot with the things that have changed since the last checkpoint that we took. OK, but the state always moves along atomically each time we take a checkpoint. So we never have to worry about, God forbid, running something like fsck over the file system, because it's just always consistent. And of course, then you say, oh yeah, but if a disk fails, or this or that... so there obviously have to be other levels of redundancy, like RAID and other things, to make sure that we can recover that state. OK. Snapshots, which are read-only, or clones, which are read-write, are very cheap and plentiful. There's effectively no limit to them, unless you run out of disk space or other minor details like that. But unlike an overwriting file system, it's really easy to do. You just take a checkpoint, and then you just save a copy, if you will, of that uberblock, effectively (actually it's a level lower in the tree), and that's your snapshot. And since nothing is ever being overwritten, as long as you don't free any blocks, it's just going to be there. And so the cost of taking a snapshot isn't much more than making note of the fact that you have the snapshot, and then taking a checkpoint, and boom, you have it. By comparison, with an overwriting file system like UFS, because we're overwriting things, every time we go to write something, we have to go, oh my, are we changing something that's part of a snapshot? Do we need to make a copy of it, blah, blah, blah. And so the more snapshots you have, the more of that checking has to happen every time you write a block. Now, there are little tricks we have, so we don't really have to check all that much, with caches and things. But nevertheless, it's painful and it's work, and the more snapshots you have, the slower it goes. And for that reason, we administratively limit you to 20 snapshots, because when you get more than 20 snapshots, the overhead becomes too painful. And we did snapshots in UFS because we sort of needed them for doing background fsck, and we needed them for some system administrative stuff, but they've always been a sore spot. And so once ZFS came along, it was just like, great, they just do snapshots really well. If that's what you really need in your environment, you should be running ZFS. People say, well, now that we have ZFS, is this just going to completely replace UFS? The answer is no, probably not. ZFS works really well, especially on giant pools of data, where you have a 64-bit processor and lots of memory and lots of processing power. You're not really going to see it running much on your BeagleBone. For a small embedded sort of system, you probably want a much lighter-weight file system.
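A conceptual sketch (plain C, not ZFS code) of the copy-on-write checkpoint just described: modified state is written into fresh blocks by copying the path from leaf to root, and a single atomic root update plays the role of the uberblock write:

```c
#include <stdlib.h>

struct node {
	struct node *child[2];
	int          data;
};

/* Never overwrite a reachable node: copy every node on the path from
 * the changed leaf up to the root. */
struct node *
cow_set(const struct node *root, int path, int depth, int value)
{
	struct node *n = malloc(sizeof(*n));
	*n = *root;			/* copy, don't overwrite */
	if (depth == 0) {
		n->data = value;
	} else {
		int bit = (path >> (depth - 1)) & 1;
		n->child[bit] = cow_set(n->child[bit], path, depth - 1, value);
	}
	return n;
}

/* Checkpoint: once every new node has reached stable storage, one
 * atomic store of the new root (the "uberblock" write) moves the
 * whole tree from the old consistent state to the new one.  The old
 * root's blocks stay valid for any snapshot that still references
 * them. */
```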
UFS has far fewer features than ZFS has. If you need the feature set of ZFS, then you should be running ZFS. If you just have a small embedded system, probably UFS is what you want. All right. Other things that ZFS has, to help give it better reliability, are metadata redundancy and data checksums. In the case of UFS, if one of your indirect blocks gets trashed, and you're not running with RAID or something so that you could reconstruct it, then you just lose that part of that file. With ZFS, you have, of course, RAID in the background typically to help you, but all of the metadata is duplicated. Every inode is duplicated; every indirect block is duplicated. Anything that's metadata having to do with the file system has two copies, a minimum of two copies. If you're particularly paranoid, you can say, well, I actually want redundancy of the data itself. It'll make two copies of all your data blocks, and, for good measure, three copies of all your metadata in that instance. When I talk about the block pointers, you'll see how that actually ends up being implemented. The other thing is that you have checksums on all your data blocks. Those checksums are not stored in the data block itself. This actually gives you better protection than if the checksum were stored in the data block itself. The place where this really helps is with what I'll call stray writes. You probably know that on the backplanes, when data gets sent out across to some I/O device, there's a parity bit on the data lines, so that if a data line bounces, then the parity bit will protect you: they'll know that the data came through badly, so it can get resent. For a long time, and on some buses still, there's no parity on the address lines. If a bit flips on the address lines, who knew? The sender thinks they sent it to one place, the receiver says, OK, that's where they want it, and bam, it just goes to some random block on the disk. Not only does your data not get written where you do want it, you overwrite something else that you probably didn't want to overwrite. If the checksum were stored in the data block itself, then when you read that block back and did the checksum, you'd think it might be OK. But by having the checksum not in the data block, now you read back, presumably from the place you thought you wrote it, and you do the checksum and it doesn't match. Or you read in the block that was accidentally overwritten, and again, the checksum tells you it's not right. That's a very key benefit that many people sort of miss, because historically the checksums were stored, in many file systems, in the data blocks themselves. So that's another key thing to keep track of. All right, you can have selective data compression and selective deduplication. Deduplication in particular, because you've got to keep track of essentially a table of the fingerprints of all the blocks that you're trying to deduplicate, and that kind of needs to fit in memory, because as soon as it doesn't fit in memory, it starts to get really slow to check for duplicate blocks. And as a consequence, ZFS does not require that you do it all or nothing. It's not like, well, I've got to deduplicate everything or none of it. You can be selective about where the deduplication happens. And so you just deduplicate file systems that have VM images or something, where there's a lot of blocks to deduplicate because you've got 18 copies of the Windows image in there, and almost every block is a duplicate of one of the other images.
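A conceptual sketch of the checksum-in-the-parent idea from the passage above: the expected checksum travels with the block pointer, so a stray write that lands in the wrong place still fails verification on read. Real ZFS block pointers are richer (up to three copies' addresses and a 256-bit checksum), and fletcher64 and disk_read here are assumed helper routines:

```c
#include <stddef.h>
#include <stdint.h>

struct blkptr {
	uint64_t dva;		/* where the child block should live */
	uint64_t checksum;	/* checksum of the child's contents */
};

uint64_t fletcher64(const void *buf, size_t len);		/* assumed */
void     disk_read(uint64_t dva, void *buf, size_t len);	/* assumed */

int
read_block(const struct blkptr *bp, void *buf, size_t len)
{
	disk_read(bp->dva, buf, len);
	/* A misdirected or lost write is caught here, which a checksum
	 * stored inside the data block itself could not guarantee. */
	if (fletcher64(buf, len) != bp->checksum)
		return -1;	/* try another copy, or reconstruct via RAID */
	return 0;
}
```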
Similarly with compression: you may have some data that's not accessed a lot, so you would rather just have it be compressed, and other stuff that you're using all the time, where you don't want the cost of decompressing all the time. So again, you get to selectively decide whether or not that's going to be done, and which algorithm you use, and so on. One of the other things which really differentiates ZFS from the traditional UFS is this notion that you have a pool of storage. So there's just this big pool of blocks, and they can be doled out to file systems as necessary. In UFS, you've got to say, all right, this file system has this many blocks and this one has this other number of blocks. And if you guess wrong and this one starts to run out, you can't say, well, actually, go borrow some blocks from that one over there. Like, no, no, it doesn't work that way. You can sort of grow them, if you happen to have conveniently left some space or something like that. But generally speaking, once you've picked the size, that's what you're stuck with. Now, of course, the problem there is, you're in this big happy pool until some clown decides to use up all the space, and now every file system runs out of space all at the same time. So, in fact, there is the ability to, first of all, put a limit and say, all right, that file system isn't allowed to have more than this amount of space, and then it's out of space, even though there's still space left in the pool. Or, conversely, you can reserve space and say, this file system has got our log files on it; it's kind of important, so we're going to guarantee that it's going to get at least this amount of space. And so, again, the pool will ensure that enough space is set aside so that you will always be able to get that amount of space into that particular file system. Also, in ZFS, you think differently about file systems. In UFS, a file system is like, oh, well, gee, do I want var and usr to be in two separate file systems? Or should they be together, and all this kind of stuff. In ZFS, creating a file system is about as complex as creating a snapshot. It's like, oh, you need another file system? Sure. So every user's home directory can be a file system if you want, and that's just not a problem. You know, special orders don't upset us. And so you can have hundreds of file systems within a pool. And then, of course, that allows people to take snapshots at the granularity of file systems. So if everybody's home directory is a file system, then everybody can take a snapshot of their home directory. OK. There is RAID, and in the parlance of ZFS, it is RAID-Z, where Z is what you think of as being kind of a variable. The idea is, in most RAID systems, you have a fixed stripe size. So if you've got five disks, then you typically have, like, one block off of each disk, and that makes up the RAID stripe. And if you write less than a full-size stripe, then you have to read in the parity and recompute it and write it back out again. So the idea of RAID-Z is that ZFS is keeping track of the size of blocks, so the size of a block on RAID-Z is just whatever size it needs to be.
So if you're writing out a block that needs sort of three chunks off of disks, then the size of that particular one is three, and then you need one that's size eight, so it's just eight, and so on. And again, I'll have a slide that shows how this works. Then, with RAID-Z, you get to decide: do you want single, double, or triple parity? Which is to say, can you have one, two, or three disks fail before you can't recover. One of the other big issues with RAID is that you get silent errors. So you have some huge pool, and some sectors go bad, but you haven't read them, so you don't know that they've gone bad. And then, when you go to reconstruct, suddenly you can't reconstruct, because of these bad sectors you didn't know about. So one of the other things that ZFS has is this notion of scrubbing, and that's where it goes through and just makes sure that all of the blocks that are in use are actually readable, so that you can find out, before you need to recover a disk, that, in fact, there's a problem there. OK. Apparently... Yeah. Yeah. It's one way to get out of giving a talk. [An alarm sounds.] Everybody out, everybody out.
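The session ended before the promised RAID-Z slide, but the variable-width stripe idea sketches easily: single parity is a plain XOR across however many data chunks this particular block happens to need, so there is never a read-modify-write of an existing parity sector (double and triple parity add further Reed-Solomon-style syndromes, omitted here):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

void
raidz_parity(uint8_t *parity, uint8_t *const data[], int nchunks,
    size_t chunklen)
{
	memset(parity, 0, chunklen);
	for (int d = 0; d < nchunks; d++)	/* nchunks varies per block */
		for (size_t i = 0; i < chunklen; i++)
			parity[i] ^= data[d][i];
}
```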
Much has been documented about how to use ZFS, but little has been written about how it is implemented. This talk pulls back the covers to describe the design and implementation of ZFS. The content of this talk was developed by scouring through blog posts, tracking down unpublished papers, hours of reading through the quarter-million lines of code that implement ZFS, and endless email with the ZFS developers themselves. The result is a concise description of an elegant and powerful system.
10.5446/18678 (DOI)
I'm Brooks Davis. I'm with SRI International, and I'm part of the team of researchers working on the CHERI project, which I'll get into in a moment. I'm here to talk to you today about CheriBSD, which is our fork of FreeBSD to support the CHERI processor. In many ways it's similar to, for instance, the TrustedBSD project or the HardenedBSD project, in that we're going off in another direction, a direction that FreeBSD isn't ready for, for one reason or another. In the case of TrustedBSD, it was that there were large swaths of fairly disruptive technology that had to be proved out at scale before they could be merged into the tree, before you're going to make changes to thousands of places in the code. CheriBSD is a little different, in that CheriBSD is about porting FreeBSD to a new CPU with new instructions, a new C compiler, and some significant changes to the C language. So, a wide variety of things that obviously we can't just dump in the FreeBSD tree; if nothing else, you can't buy the CPU, so people might object to large-scale changes for something you can't even use. But first, a little background. Hardly a week goes by where you don't hear about some new breach or malware problem or whatever: Anthem losing 80 million customer records, the banks losing hundreds of millions of dollars in things like the Target breach. Or, I think this one, actually, if I remember right when I created this slide, this is a reference to the banks simply failing to notice hundreds of millions of dollars in fraudulent transactions that were made to move money to another bank, and then they took the money out, and poof. Or one of the most recent ones, the Office of Personnel Management in the US, basically HR for the civilian part of the US government: they lost 3.2 million HR records, so basically everything you need to know to steal someone's identity. So this is the daily reality of computing and the internet. So we decided it was time to do something about it a bit. The approach we're taking is one that's been taken for quite a while, which is application compartmentalization. Compartmentalization decomposes software into isolated components. Each sandbox runs with only the rights it needs, so you can follow a least-privilege approach. One common example of this that you, as Unix users, probably use every day: SSH has privilege separation, so that the bits that must run as root are separate from the most risky bits that do all the crypto handling, or as much as possible. And the goal here is that you can take an application, you can start with an application like, in this example, gzip, and you can cut it up into multiple pieces. So the compression logic, which is what people screw up, because they write it in tight C code, designed to be fast, designed to trick the compiler into generating the best code they can, at least 20 years ago, whenever they made those decisions: you can put that off in a process which has limited rights. This example maps pretty well to Capsicum, which we already have in FreeBSD. It is a process-based framework where we have capabilities, which are unforgeable tokens of authority; ours are file handles, a very fundamental thing in Unix. And you can enter a capability mode where you cannot open any new file handles through arbitrary namespaces; you must obtain them via other capabilities. And you can restrict the rights on those capabilities.
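A minimal sketch of that pattern using the real FreeBSD Capsicum API, in the shape of the gzip example the talk turns to next: acquire descriptors first, restrict their rights, then enter capability mode so no new namespace access is possible:

```c
#include <sys/capsicum.h>
#include <fcntl.h>
#include <unistd.h>

int
setup_sandbox(const char *in_path, const char *out_path)
{
	int in = open(in_path, O_RDONLY);
	int out = open(out_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (in < 0 || out < 0)
		return -1;

	cap_rights_t r_in, r_out;
	cap_rights_init(&r_in, CAP_READ);	/* input: read only */
	cap_rights_init(&r_out, CAP_WRITE);	/* output: write only */
	if (cap_rights_limit(in, &r_in) < 0 ||
	    cap_rights_limit(out, &r_out) < 0)
		return -1;

	/* From here on, open(2) of arbitrary paths fails; only the two
	 * descriptors above, with their reduced rights, are usable. */
	return cap_enter();
}
```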
So for instance, in the gzip example, what gzip does in its most basic mode is, it opens one file, and it opens another file handle, creates a new file, and it does a bunch of work moving data from one side to the other. You obviously don't want the input file to be writable, and you don't want the output file to be readable; or, that's probably fairly harmless, but in principle you don't really want that. Capsicum works great for something like gzip. It works great for OpenSSH, where the existing privilege separation was already based around Unix principles. But it doesn't scale to the kind of things you might want to do. For example, in a web browser: right now, a web browser like Chromium has a separate process for each tab, at least until you open too many of them. The problem is that if you have too many processes, you start to run out of TLB entries on your CPU. I know of one instance where, in addition to running out of TLB entries, some systems are running out of address-space identifiers. So even though their processes are often quite small, if you have too many of them, you still get TLB misses, simply because you've had to reuse an address-space identifier. So there are some serious scaling problems. And that's even just with tabs. What you'd actually like is to render every image in your browser in a separate sandbox, so that when something does go wrong, the email that somebody sent you that had a bad image embedded in it can't read your bank statements, can't hijack your password-reset URLs, all of which are in your email, and all of which are in that tab and in that process. So we want to make this scale, and we're doing that in hardware. So, as I was saying, with process separation you can avoid this problem where you have one process here and a pointer to a buffer in some other part of the program, or in some other part of the application: if it's in another process, this arrow doesn't work. And we'd like to do that in a single application and have that mechanism for every pointer, so we can have every C object be a separate thing that you cannot manufacture a pointer to. So with CHERI, we are doing that with capabilities. We've created fat pointers. These are 256-bit pointers. Yes, that's big. Each pointer has an offset, which is in practice the pointer; it is where you're pointing in memory. And you have a base and a length, which are relative to your virtual address space. And there's guarded manipulation of this, which means that you can only create a capability by deriving it from another capability. And the only derivations that are allowed are ones that shrink it: you can increment the base and move up in address space, or you can shrink the length. You can also reduce the permissions of the pointer. One important thing here, though, and the reason we have an offset: in our initial design, we did not have an offset. It turned out that in real-world C code, that's not OK, because you can't just keep shrinking your view of the data. What you see in application programs like FFmpeg: inside libavcodec, there is code where they pass a pointer to the middle of a piece of data to be decoded, because the compiler generates better code that way, because it can use small positive and negative immediates. So we have this offset, which allows our capabilities to be near-perfect replacements for C pointers.
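A conceptual sketch in plain C (not the CHERI hardware representation) of the fat pointer just described; the offset behaves like an ordinary pointer and may wander, while base and length bound the object:

```c
#include <stdint.h>
#include <stdlib.h>

struct cap {
	uint64_t base;		/* start of the object */
	uint64_t length;	/* size of the object in bytes */
	uint64_t offset;	/* the "pointer"; checked only on use */
	uint64_t perms;		/* load/store/execute permission bits */
};

#define CAP_PERM_LOAD	0x1

uint8_t
cap_load_u8(const struct cap *c)
{
	/* The bounds and permission check happens here, at dereference
	 * time; on CHERI the hardware traps instead of calling abort. */
	if (c->offset >= c->length || !(c->perms & CAP_PERM_LOAD))
		abort();
	return *(const uint8_t *)(uintptr_t)(c->base + c->offset);
}
```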
And the reason this works is that you only check that the offset is inside the base and length at dereference time. So, as I alluded, we want these to be C pointers. We've done that, and we have two modes where you can use them. One is a hybrid mode. This is where you have conventional, in our case MIPS64, code, and you annotate some of the pointers in your code as being capabilities. That means the compiler reserves more space for them and uses the correct manipulation instructions to access them. And it works. So you can use this in code if you want to just protect a few buffers that are very, very important. But it is a lot of work. We made some changes to tcpdump, and I'll talk a little more about them later, where I added bounds to the packet buffer that was being dissected. And that worked, but that was many days of unpleasant changes to the code and thousands of changes. So we ended up adding a pure-capability mode, where every pointer is a capability. In translation units, or files, that are compiled in this mode, all pointers are 256 bits, they're all bounds-checked, and almost all bounds are correctly inferred. So if you malloc something, you get a pointer that's exactly the size of the object that you allocated. If you try to go outside the bounds, things go boom in a nice, predictable way, rather than randomly corrupting your memory. So, as I say, we have a range of ABIs here. We have plain MIPS64: conventional FreeBSD runs on this processor without modification, and if you use it in this mode, you get no benefits, but it also doesn't require any work. The important part is that you can work your way through, and there's a bunch of different approaches we can take. We have a lot of hybrid code today and some pure-capability code, and we're increasingly moving towards pure-capability code. And here's an example of the sort of things that we can support with the processor. So this is our current situation, for the most part. We have a conventional kernel, running standard MIPS64 code. We have a conventional application, also running standard MIPS64 code. And we have zlib in a compartment. zlib was chosen because it has a small interface, and it was easy, and it compiled. And it has pure-capability code inside, and then a little hybrid wrapper around the outside that lets us call in. The application, in one example that we've talked about in one of our papers, is gif2png. It doesn't know that the library is protecting it, and in fact there's essentially no performance impact in this case. But if there's a bug in zlib, it will fail-stop. And even if it doesn't manage to fail-stop, it's very difficult to gain control of the application, if it's a pure, within-the-bounds-of-C, control-flow bug. You can also see a world, which we're actively working right now to add, with a pure-capability application, which means yet another syscall interface, and pure libraries. One interesting thing, though, and this example actually isn't ideal, but one of the interesting things is we can take conventional MIPS64 code, say a binary library that we don't have any source code to, either because we bought it from someone who doesn't want to give us source code or because we lost the source code, which I've heard of some large internet companies doing, and we can put that in a little sandbox, and we can have a wrapper interface. So our library can be pure even though we have this bit that we don't have any control over; it can't get out of its sandbox.
So it's less likely to fail-stop if something goes horribly wrong, but what it can do is very little. If it's a proprietary video codec, all it can do is write the wrong video frames and read some video frames, which it was supposed to be doing anyway. You can also go farther along, and of course start writing a microkernel, having a single-address-space application, lots of neat things. But we're quite a ways from that. If nothing else, LLVM doesn't yet compile MIPS64 in a way that's usable for a kernel. So, just a little bit more on the CPU. We have a prototype CPU. It's a 64-bit MIPS CPU, sort of R4K-like, so nicely out of patent. It has the CHERI ISA extensions that I've been talking about. It runs at 100 MHz, which is a bit slow, but we lived with it in the 90s. And we actually have quite a bit more RAM, which helps. We have a gig of RAM on the boards, and could have four gigs of RAM, which would have been a little unimaginable when I was in college. That's mostly good, except occasionally it gets a little exciting. One of the early bugs we had was a bug in the SD card controller that we got from the FPGA manufacturer: if you wrote a byte to the buffer two cycles in a row, it threw the second one away. There literally was code in there that said, we got two writes in a row, throw it away. We commented that out, and it works better. But one interesting thing is, we only found this problem after a reboot, because we had so much RAM that the entire file system fit in the buffer cache. So it was only when we flushed the buffer cache, by rebooting the machine, that we discovered that we'd corrupted the on-disk data structures. And fsck would say, oh, what have you done to this poor file system? So that's one of the interesting challenges here. But the neat thing is that we have this CPU, we have this operating system that I'm going to talk more about, and we have a modified LLVM. So we can run real software, and that's where things get exciting. We can really test things out. So, CheriBSD is, of course, the FreeBSD to support CHERI. It's a mix of platform support, which is to say drivers for things on the FPGA and peripherals, and BERI, which is the base CPU. You can compile our CPU without the CHERI bits if you want. It's open source. And if you want to do things like run a hardware class, you can use that fairly simple CPU and add things like a better branch predictor as an exercise; people at Cambridge have done that. There's also support for the new ISA features; I'll give you a little rundown of how much change that was in a bit. There's infrastructure to support the compartmentalization of libraries, or whatever it is you want to stick in a compartment. There are some custom applications; I'll talk more about tcpdump and a few others. And then there are a bunch of build system improvements, because we're doing lots of slightly weird things. For example, we have the ability to build FreeBSD and install it into a directory without any privilege, which is now being used in parts of the release build infrastructure. We did it because we wanted to have grad students building CheriBSD, and I didn't want to give them root and mess up our nice fancy machines. Here's a snapshot of our page on GitHub. You'll notice almost 6,000 changes here relative to FreeBSD. And actually there are quite a few more than 6,000 now; that was a week ago, and we're a bit behind.
We merge periodically; I'll talk quite a bit more about merging. But the main thing, and what I think makes our project interesting, is that we want this to be publicly accessible, because we have collaborators, which means we can't do things like rebase. But we've merged probably 200 or 300 changes out of this tree. So we have this huge set of changes that we have to maintain over time, which leads to some challenges in version control. So, next up, here's a breakdown of the kernel changes. This was in our paper at IEEE Security and Privacy. We added a bunch of headers, lots of various things, lots of access to assembly functions. There's some setup of CHERI in the kernel, basically saying, turn it on, give me the default capability. At boot there's a default capability, which is the whole address space, and you can start chopping that up later. There's context switch code, because our new capabilities have to go into special registers, so we need to save and restore those registers. There's exception handling, because we've created all sorts of new ways to cause exceptions, since now an out-of-bounds pointer dereference is a trap. There's some memory management, memory copying, swap. There's some support for the actual compartmentalization bit, which is somewhat separate from the memory safety: a bit for system calls, a bit for signal delivery, et cetera. A few thousand lines of code, which actually is not too bad, because FreeBSD is several million lines of kernel code. And we've actually written a very tiny little microkernel before that's not very big; I don't even think it works anymore. So, in addition to kernel changes, of course, we had to make some tweaks to the runtime. The biggest thing is that memcpy and memmove need to be capability-aware, even in capability-oblivious code. One of the interesting things here, and one of the reasons why it's really important to have a team as large as we do, is that our first version of the ISA didn't allow you to implement memcpy efficiently. You actually would have had to check every 256-bit chunk of memory to see whether or not it was a capability, and then copy it with capability instructions or copy it with regular instructions. That would be insane. So you can now use capability instructions to copy non-capability memory. And it actually turns out that FreeBSD's generic memcpy and memmove implementations in C required almost no changes; we just had to tell it to use the right size as the basic word size when doing copies, and it generates the right code. I was actually pleasantly surprised. We, of course, have assembly versions as well that are a little better, but that's the basic story. The nice thing is, with the C implementation, we get a lot of that for free. We also had to add explicit versions for hybrid code. We had to have explicit versions of things like string and memory manipulation functions so that we could have a mix of arguments. That, I think... I'm not sure that's going to stay, but in our current infrastructure, that's how it has to be. And then there's a bunch of interesting cases. Current C code, like in the strlen function, assumes that once it's aligned to the size of a word, it can always read whole words. But we have byte-granularity restrictions. So just because you can definitely read the next byte, that doesn't mean you can definitely read the rest of the word. So that had to be fixed. It was easy to do, but required a bit of change.
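The strlen idiom in question, sketched: classic implementations align to a word boundary and then read a whole word per iteration, assuming the bytes past the terminator are harmlessly readable; under byte-granularity capabilities that final overread traps:

```c
#include <stddef.h>
#include <stdint.h>

size_t
wordwise_strlen(const char *s)
{
	const char *p = s;

	while ((uintptr_t)p % sizeof(uint64_t) != 0) {	/* align first */
		if (*p == '\0')
			return (size_t)(p - s);
		p++;
	}
	for (;;) {
		/* May read up to 7 bytes past the terminator: fine with
		 * page-granularity protection, a bounds violation with
		 * byte-granularity capabilities. */
		uint64_t w = *(const uint64_t *)p;
		if ((w - 0x0101010101010101ULL) & ~w & 0x8080808080808080ULL)
			break;			/* some byte in w is zero */
		p += sizeof(uint64_t);
	}
	while (*p != '\0')
		p++;
	return (size_t)(p - s);
}
```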
And one thing that we're working on right now is taking the syscall implementation in libc and moving it into a libsyscalls. So yes, I'm going to be asking for exp-runs on the ports at some point, because that's going to be exciting. But it'll help us, because inside a sandbox, we may want to attempt to make syscalls and have the kernel mediate based on some new configuration that's Capsicum-like. Or we may want to have something that simply proxies the syscalls back to the privileged part of the application, which can make the decisions. It would also be useful for people like Google, who are building Android with FreeBSD, and their syscall layers, obviously, totally differ. So the separation, I think, will be generally useful. Next bit: bits for compartmentalization. So libcheri is the library for creating sandboxes. You can instantiate objects, you can give them types, and then you can allocate copies of the objects, do resets, that sort of thing. So that's the core functionality for implementing compartmentalized libraries, or compartmentalized tcpdump, for instance. It also has a loader and a runtime linker, since objects have a new calling convention, and we don't want to have to write horrible little RPC-like things, where you say, well, I'm going to pass eight integer arguments and eight capability arguments, and try to remember which ones go where. We did that for a while. It sucked. It also includes some syscall implementation bits, so compartmentalized code can call out to an object in the privileged process. Something with a confusingly similar name: we have the libcheri directory. This is where pure-capability versions of libraries live. It's also where the objects live. It's a lot like the 32-bit support for 64-bit machines. In fact, I copied and pasted this stuff in Makefile.inc1. One reason why merges suck is there's lots of churn in that file. Right now, we're only using those libraries to build little compartmentalized libraries that have a hybrid outside interface. Longer term, the plan is to have pure-capability libraries. So as part of your transition from a conventional ISA to CHERI, you'll be able to pick points along the way, not only at the library level, but at the application level. So you can have your most important, or, well, either most important or easiest to modify, applications be pure capability, and other applications can remain as they are. You can still have your binary-only whatever program it is that you have. We also have demo applications. This is a picture here of the very weak PowerPoint-like program that I wrote as a demo. We actually gave our talk to DARPA at the principal investigators meeting on our tablet, which you can see Robert holding here. And when we got to the end, we said, one more thing: not only are we running on our hardware, but the slide deck has exploited it and triggered a Trojan, and CHERI successfully defeated the Trojan. I don't know if it was very hard, since I wrote all of it. But nonetheless, the technology did work and does do this. So we've got a bunch of little custom applications as part of CheriBSD. Those don't present any particular challenges for us in terms of CheriBSD maintenance. The thing that does present bigger challenges is that we're also modifying existing applications. So at one point, we took a look at compartmentalizing Wireshark.
Wireshark's full of vulnerabilities. It's a giant program. We spent quite a bit of time on it and said, well, that was insane. It's 3 million lines of code and uses GLib, and it's all very complicated. We decided to do tcpdump instead, which has the advantage of being in the base, which is kind of good and bad. Our first version was very simple. We just compartmentalized a section, and it was standard MIPS code on both sides, with just a little bit of hybrid code to get in and out of the sandbox. We later added memory safety, which I alluded to before. And that's 6,000 lines of changes, a whole lot of not fun. Part of that, though, taught us an interesting lesson. There's a modest amount of code in tcpdump that advances the buffer, advances the buffer, advances the buffer, and then says, oh, whoops, I want something a little bit back behind my pointer. That's fine in C. That's a perfectly legitimate and reasonable thing to do. However, before we had the offsets, that didn't work, because we'd incremented the base, we'd incremented the base, we'd incremented the base, and oops, can't go back. That's not allowed. So we added offsets. That helped quite a bit. I added per-protocol dissector sandboxes. This is the most protection you could get in tcpdump, which is to say every protocol lives in its own sandbox. So as you're dissecting, IP goes fine, TCP goes fine, you call into HTTP, or probably more realistically, you call into something like SNMP, the ASN.1 parser is broken as it always is, and gets exploited. You can still trust the TCP and IP dissection, because you've failed in a deeper sandbox. That required a modest amount of code change; you had to change all the call sites, but that's pretty reasonable. We went to pure-capability mode. That got rid of tons of annotations, which was nice. And the pure-capability mode, removing annotations, was actually driven by the fact that FreeBSD got a new version of tcpdump. This is one of the things that's really interesting about working on a real-world project and keeping your tree up to date: we learned some lessons about maintainability the hard way. We tried to do this import here. There were 3,600 conflicts in Git. Seemed like a bit much. So we made a bunch of changes, got back to here. When we added the linker support, that reduced the amount of junk in the code base. And actually, this dropped dramatically, because, somewhat surprisingly, the tcpdump maintainers accepted my change to shuffle 1,000 lines of code around within the tree and put it in different places. So the interface between the dissectors and the front end is both fairly narrow, in that it's only five or six functions, and fairly simple, which should help build a better Capsicumized version of tcpdump, but also simplifies my code. We'll see how my next merge goes, because the approach I took when I upstreamed it is a bit different. There's also a bit of infrastructure work we've done. I've alluded to some of it: the unprivileged builds. We've added some hacks to let us change the compiler on a per-program and per-file basis. That's because early on, our custom LLVM was not robust enough to actually compile all the code. So we wanted to focus on the code where we could do something interesting, and then expand out over time; it's getting close to being able to compile everything.
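In code, the pointer-rewind pattern that broke looks something like this; a hypothetical sketch in the spirit of tcpdump's dissectors, not its actual source:

```c
#include <stdint.h>

/* In plain C, backing up a few bytes behind an advanced pointer is
 * perfectly legal.  With a capability whose *base* was moved forward
 * at each step, the final read would trap, which is why the CHERI ISA
 * grew an offset field: advance the offset, leave the base alone. */
uint16_t parse(const uint8_t *pkt)
{
    const uint8_t *p = pkt;
    p += 14;                     /* skip a link-layer header */
    p += (p[0] & 0x0f) * 4;      /* skip a variable-length header */
    /* "Oh, whoops, I want something a little back behind my pointer": */
    return (uint16_t)((p[-2] << 8) | p[-1]);  /* fine in C; not with a
                                                 moved capability base */
}
```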
I spent quite a bit of time the last few weeks trying to compile things and then sending David bug reports when the compiler crashes. And some other hacks, which will help some upcoming changes; for example, we strip binaries during the build rather than using the install program, things like that. For more information on what's generally in CheriBSD, particularly the BERI stuff, I suggest reading the journal article that I wrote a few months ago. Pretty good. Now on to revision control. This is one of the places where we've had a lot of challenges. We started off in Perforce, which was the conventional way that people did forks of FreeBSD in the past. It's supported on FreeBSD project infrastructure, which is good. Merging is very good. It really is a good way to maintain something that's a fork of FreeBSD in the long term; I've done it at previous jobs as well. And it's easy to maintain what I'm calling here stacked branches. So for a while we had a BeriBSD, which was the platform bits. It sat in between CheriBSD and FreeBSD so that we could try to maintain some separation there. We merged everything through the BeriBSD branch. That wasn't too weird, and we got rid of it at some point, but it was nonetheless quite helpful. And our team already knew Perforce, so that was a good reason to stick with it. The downside is that Perforce sucks at public access. You have to give people an account to give them access to the system. Every checkout involves adding server state. So even if we were willing to give out a lot of accounts, eventually the project would run out of resources. It's very easy to get into the situation where your Perforce server needs to have a half terabyte of RAM, and we probably would have had to buy it for the project in that case. Adding users is a bit annoying. And the offline support isn't very good. That's not too big a deal, but it's not good. So in October 2013, we decided we needed to have public access. There were people at MIT Lincoln Labs and some other places who wanted to start using CheriBSD. We wanted to give them direct access to the repository rather than having to take the time to package up dumps and snapshots and push them out and QA them and all that. So we moved over to GitHub. We lost a little history granularity in the process, because many of the commits couldn't just be applied one at a time; things had moved on, and they'd been merging. But really not too bad. However, it was a bit of a trial by fire for using Git sort of at scale. FreeBSD is a bit on the big side for Git, and our export has some weird features that we'll get to. Also, Robert and I, who are the main CheriBSD developers, were not experienced Git users at the time. So, lots of excitement. It's not clear that our model was the right model, but it's the one we've got, so we're kind of stuck with it. What we ended up doing is we forked the FreeBSD repo on GitHub. One thing that I found kind of weird about GitHub's forking model is that if you want to fork CheriBSD, it seems that you get a copy of the FreeBSD repo. That's what happened when I tried to do it recently. That might simply be because I already have a copy of the FreeBSD repo. So if someone wants to try it: it's only a gig or so, GitHub will never notice. But that was a bit odd. And we did all the commits to the master branch. I'm not sure that's the right solution, but it's what we've done. And at this point, we're stuck. We're not going to do a forced rebase and mess everything up.
And then we merge changes from the FreeBSD upstream periodically. The typical working model is we merge changes when we need something. Or, after a big deadline's passed and we realize we're behind, we'll do a merge just to sort of catch up while we have some breathing room. So our first attempt is the basic, obvious thing you might think to do: we fetch upstream, we merge master into our current tree in a branch, and we just merge it. It mostly works. The first few times, it went pretty smoothly. There were some strange-looking conflicts that I didn't understand at the time, but it worked; we got past them. Although one annoying thing that we still haven't resolved is that rebases go horribly wrong. If somebody else pushes while you're doing a merge, you just have to throw the merge away; rebases never work in this case. However, after a few times, we came along and started doing work integrating the VT console stuff into our tablet platform. And I did the merge. Everything compiled. Everything seemed to work. I pushed it, and something was broken in VT. Who knows where? It wasn't due to a merge conflict. It was actually due to an API change, it turned out. The problem is, this is sort of a notional model of how it works: you're going along, you're going along, you're going along, you merge upstream, it's all good. The problem is, this is more like it. And this is actually much simpler than reality. Reality is you're going along, you're going along, thousands of changes occur, and then you merge from upstream and pull in another three months of development, which is typically several thousand changes. And if you try to bisect, well, in our case, an API had changed. So all these commits were fine, but they don't include any of our code. And all these are fine, because the problem was here. Well, here, I guess, technically. And so there's nothing to look for. So that was really annoying. I ignored it for a week; I didn't need the VT stuff, so it was OK. Ed found it eventually. So I wrote a tool that I whimsically named Mergeify. It merges one commit at a time, because from the perspective of a consumer of FreeBSD, every change is a feature. That's not perfectly accurate, because sometimes it's a commit that's broken, and then another commit that fixes it, or a commit that's broken, and then a few commits, and then a commit that fixes the previous one. There's no way to deal with that case in a sensible way, so I just punted. But merging one commit at a time does help in that you now have commits you can bisect. Each of those merge commits is useful. The feature I haven't added to the tool yet is something to knock out all of the child commits from the bisect. It would be pretty easy to do; I just haven't written it yet. That way, you only consider the commits that actually change your branch. One of the key things that I figured out over time is that, from our perspective, it's only those merges that matter. Everything else: it's got some history, who cares. So in the first attempt, we just merged every commit. And then tcpdump came along, and there was an update in contrib, and things went really strange. We got merge results that were completely nuts. Mostly, the top-level Makefile would get something from contrib squished into it.
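To make the workflow concrete, the core loop of a mergeify-like tool might look like the sketch below. This is hypothetical code, not the real tool; it drives git via popen(3)/system(3) and assumes a remote named "upstream" pointing at the FreeBSD repository. The contrib weirdness just mentioned is also why, as explained next, it ends up walking only the first-parent commits:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* --first-parent: only commits made directly to upstream's master,
     * skipping the vendor-branch ancestry that confuses merges. */
    FILE *revs = popen("git rev-list --reverse --first-parent "
                       "HEAD..upstream/master", "r");
    if (revs == NULL)
        return 1;

    char hash[64], cmd[128];
    while (fgets(hash, sizeof(hash), revs) != NULL) {
        hash[strcspn(hash, "\n")] = '\0';
        /* One merge commit per upstream change, so that a later
         * "git bisect" has meaningful points to test. */
        snprintf(cmd, sizeof(cmd), "git merge --no-edit %s", hash);
        if (system(cmd) != 0) {
            fprintf(stderr, "conflict at %s; resolve and rerun\n", hash);
            break;
        }
    }
    pclose(revs);
    return 0;
}
```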
It turns out that what's happening in the FreeBSD export is that things in the vendor branch have a common parent with the FreeBSD tree: the empty repository. So in fact, there's a top-level Makefile in both of them. And Git says, they've got common blank lines, squishes them together, and it really goes badly. The thing is, you don't actually care about that commit. It's not important. What you care about is the commit that merged it into the FreeBSD tree. So I changed the code to only pick the direct commits to the branch and merge each one of those one at a time. I was going to do a demo here, but I'm getting a little short on time, so I think I'll skip the demo and come back to it at the end, particularly since there's a bit of fussing with the projector to make it work. So I alluded before to rebases being broken. And I think it's, again, because rebase is applying all the commits individually rather than those individual merge commits. So it's on my list to attempt the change to try applying those commits one at a time, basically re-implementing rebase. Should be doable, but I haven't done it yet. I have an upcoming merge that looks like it's going to be exciting, so maybe I'll fix it then. So I've got some to-dos for Mergeify. I need to add this rebase mode, and this bisect mode I talked about before, where you can skip all the commits that don't make any sense to look at. And one thing I would like to do eventually is to periodically, say, every 10 commits or every 100 commits, check that things build. Right now, I don't get to do that, which is sometimes a little frustrating. I get to the end, and I discover Git botched a merge somewhere, or I botched a merge. My current workflow is: any time I do anything by hand, I assume I screwed it up and do a full build just to make sure it's right. I'd really like to do a build every commit, but that would be really slow. Right now, it takes several seconds to merge each change, because Git is fast, but even if it's all in RAM on a fast machine, the FreeBSD repo is big. So this would take quite a bit longer. Hopefully, Simon Gerraty's meta mode will make it fast enough that I could try every 10 commits or so, and then bisect within that once I knew what was going on. So, on to another topic: upstreaming. The best way to remove merge conflicts, of course, is to upstream your changes. That way, they become everyone else's merge conflicts, and not mine. And if that section of the Makefile is the same, then things are good. Particularly the top-level Makefiles in FreeBSD: I have made so many changes to them that I get conflicts all the time. Anyway, there are some questions of what to upstream. I think there are some philosophical questions here, and the answers will vary depending on your project and the nature of your work. Obviously, drivers for things that people can use should be upstreamed. And drivers are one of the better things to upstream, because as the owner of the driver, it means that when somebody changes the infrastructure, they have to update your driver. And you don't have to come along three months later and say, what the heck happened? Why does my driver not work anymore? Why does it not compile? That sort of thing. Also, general infrastructure. We've built quite a lot of infrastructure along the way, so we've been upstreaming that as we can, or as we have time. And things that are shared by multiple external consumers.
I like to try to upstream things that are useful to multiple people, even if they're not useful to most consumers of the project or not entirely useful in the base system. If you do a cleanup that is critical to you, and doesn't matter to the base system, but other people are using it, I think that's a good thing to upstream. But that's a little tricky, in that those are the sorts of things that get broken, and I've had some issues with that. And also, things that are just low impact and are likely to generate conflicts. So we actually have a new signal that we've been meaning to upstream, because Capsicum might use it eventually; it won't immediately. But every time someone adds a new signal, it creates a ton of conflicts, because in every single case, it's adding something in exactly all the spots we changed. So I think we'll upstream SIGPROT soon. So, things we have upstreamed: FDT support for MIPS, so a flattened device tree. It was there, it didn't really work, we fixed that. A bunch of drivers, a bunch of driver improvements. We added a way to turn on the floating-point support that was in the kernel, and then we made it actually work for MIPS. We added bootloaders for MIPS. We've added the unprivileged build stuff. And a bunch of other stuff; I've been at it for four years now, so lots of things. We also have some related upstreaming we've been doing, not to FreeBSD, but to other projects. If you saw Stacy and Sean's talk earlier today: the QEMU user mode work that's letting us build ARM and MIPS64 packages, which will soon be official. That's work we did because we needed packages. It turns out we don't actually use them very much right now, but notionally, we knew we were going to want to do demos with out-of-tree code eventually. So we needed some way to build them other than trying to build Qt or something on a 100 megahertz MIPS. That didn't seem like a good idea; with build times that long, even a power failure becomes an issue. We've also done a lot of improvements to Clang and LLVM on MIPS64. The Imagination Technologies people's focus is definitely on little-endian 32-bit MIPS, and we're big-endian 64-bit MIPS, so we've had to fix a lot of things. And I've been upstreaming stuff to tcpdump, mostly to make my life easier merge-wise. But they seem to like the compartmentalization direction, so that's nice. Internally, we've been doing some releases. We've done them internally for a long time, mostly snapshots. Initially, I would periodically do a release build using our little build system, push something out, and put it on the wiki. We've done a couple of restricted releases to partners. One of the problems we have is that with the FPGA, the license agreements mean you can't share compiled bit files. So we give them out to a few people who are in the program and whose licensing situation we know. But it's sort of the AT&T Unix problem that Kirk alluded to. And we've started doing public releases. We did our first public release three weeks ago, I think. So, cheri-cpu.org: you can download our CPU, you can compile it. Don't run the bit file that comes out without some careful testing, because at least with the ones our Jenkins build is producing, the fan doesn't work. It seems to run OK as long as you don't do too much; then it gets hot and things misbehave. So, another small change of gears.
Since my focus in this talk is as a FreeBSD developer, some tips for developers, many of which revolve around the fact that as you develop new ways to build the kernel or new ways to build the OS, you spend an awful lot of time compiling. So we've got some suggestions for ways to deal with this. The first, and probably most important one: use a big enough machine. Seriously, if somebody is paying you to wait for compiles, they can buy quite a lot of hardware fairly quickly for what they're paying you. My view is you want enough RAM to hold all the source and all the output. We know 128 gigs is enough; John Anderson has one that's working well for him. The machine we use is a 256 gig machine. We have fast ZFS, a mirror with a half terabyte of SSD. Works pretty well. I would say that 128 gigs should be enough, and 256 gigs definitely should be enough for anyone, except last week, when there was a compiler bug and we had a dozen Clang processes using 70 gigabytes apiece. We ran out of swap. It was exciting. And it turns out, well, they ran out of swap, and then they started dumping core. And you can't kill processes that are dumping core, so that was unfortunate. But usually it's big enough, and we literally could not do the work we're doing without this machine. It's only about $5,000. It's definitely worth it. Another thing that I've found really helpful is a little push notification service that has a REST interface. The one I use is pushover.net. There's a web-browser-based client that's $10, and there are Android and iOS clients, which are also $10 apiece to activate. You couldn't write the service for that, so it's totally worth it. And I have a little command-line wrapper: I run notice, then a command, and then I get something like this on my screen, and a bing, and my phone buzzes. It's really great at reducing the latency between when my compile finished and when I notice that it finished. So that's been really helpful. Another thing: I build everything in tmux sessions. And one of the things that I discovered partway through is that it's really important to switch away from the build. Do not send it over the network. Even over gigabit locally, enough stuff buffers that it significantly delays your ability to get back to the prompt. So better to not render it at all. That was slightly surprising. I found that at Cambridge. I knew it was an issue over the Atlantic; I would switch away when I was tethering, so I didn't just waste all that bandwidth. But it turns out, yeah, always switch away. And I guess one final little tip: continuous integration is really great. We do full OS builds after every compiler or OS change. They take about 20 minutes. We also do full releases at least a couple of times a day. It keeps everything working. And one of the key things for us, because our architecture is weird and we are making strange changes, is that we build CHERI, MIPS64, and amd64 all the time. amd64 is actually a really good canary, because it also builds i386 libraries. So that means we build a whole lot of stuff, and it just keeps us honest. We are a research project, but we really want this stuff to be used for real. So we don't want to veer down some blind alley in code changes and then come back and say, oh, I have three months of work to get this thing functional again, into a shape where I could merge it. So we do that all the time.
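Going back to that notification wrapper for a second: a minimal version of it fits in a few lines of C. This is a hypothetical sketch, not the actual tool; TOKEN and USER stand in for Pushover application and user keys, there's no shell-escaping of arguments, and strlcat is the one in FreeBSD's libc:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: notice command [args ...]\n");
        return 64;
    }

    /* Reassemble and run the command line. */
    char cmd[4096] = "";
    for (int i = 1; i < argc; i++) {
        strlcat(cmd, argv[i], sizeof(cmd));
        strlcat(cmd, " ", sizeof(cmd));
    }
    int status = system(cmd);

    /* Fire the push via Pushover's REST API by shelling out to curl(1).
     * TOKEN and USER are placeholders; status is reported raw. */
    char push[4096];
    snprintf(push, sizeof(push),
        "curl -s -F token=TOKEN -F user=USER "
        "-F 'message=%s finished (status %d)' "
        "https://api.pushover.net/1/messages.json > /dev/null",
        argv[1], status);
    system(push);
    return status;
}
```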
And also, when we're working on a release now, we create a separate set of Jenkins jobs to build the release daily from the release branch, just to make sure everything is stable. I'll mention some of our papers that have been published. We have three papers at top-tier venues in the last year. We had a CHERI hardware paper: "The CHERI capability model: Revisiting RISC in an age of risk." That's mostly what I talked about earlier. We have "Beyond the PDP-11: Processor support for a memory-safe C abstract machine." That was at ASPLOS. If you're a C geek, it's a great paper; I definitely recommend it. I learned all sorts of weird things about C writing that paper, so that was fun. And then we had a compartmentalization paper at IEEE Security and Privacy a few weeks ago. There's also an ISA document and whatnot on the Cambridge website. We have future work here. We're working on a pure-capability FreeBSD. It'll probably be a very long time, even once hardware exists, before you'd ship a version of FreeBSD that only uses pure-capability code, but it's pretty likely that you'll want to run a fair bit of pure-capability code. And we also realized recently that the best way to get a ton of code running is to be able to do a pure-capability build within FreeBSD. Then we can just compile everything, try to use it, run it through the test suite, and see what happens. It'll no doubt be exciting. We'd also like to add CHERI to the kernel. The code we have in there is all assembly or macros around inline assembly. It works for what we're doing, but we'd like to be able to do things like protect mbufs, protect storage buffers, compartmentalize the kernel, make it into a microkernel. And, as I said early on, 256 bits is a pretty big pointer. There's lots of overhead in terms of cache footprint and whatnot; in extraordinarily pointer-heavy benchmarks, it's about 20% performance overhead, which is not acceptable. So we're working on 128-bit compressed capabilities. There are some interesting trade-offs we're exploring. We're pretty confident it's going to work, but there are a lot of details. And in our simulations, that should get the overhead down to about 3% in a benchmark that's basically data structures that are all pointers. And then we're also looking at non-MIPS architectures. So, happy to answer any questions. I just threw our timeline up here. We've been at this about five years now. I am guessing we've got over 50 person-years of work into it, and we've got quite a bit to go. But it's a fun project, and I think we've done a lot of interesting development and stuff that's generally useful for FreeBSD along the way. Any questions? Yes? For your continuous integration stuff, are you synthesizing the Verilog and then running your compiled software on that, full stack integration? Yes. We're actually using Bluespec SystemVerilog, which is a Haskell-derived HDL. We compile it; it has a mode to compile to a cycle-accurate C simulator, and we've been doing that for a long time. More recently, we've started synthesizing bit files, loading them on FPGAs, and then running the operating system on them. And that's tied into Jenkins? Yeah, that's Jenkins. The Jenkins cluster has grown quite a lot over the last year. On the hardware side, one of the RAs is amazingly patient and willing to deal with broken junk. Is that part of your infrastructure open source? No, I don't think so. Parts of it might be in the release.
I'd have to go look. If you want to ask me later, I can take a poke at the GitHub repo to see what we've actually put out and what's buried in the Jenkins config. You said you're adding another syscall interface. Is that the CloudABI work? No, this is not the CloudABI work. I probably should have gone to the CloudABI talk earlier, but I've talked about it a bit. So what we're doing is adding a new syscall ABI. Like the lib32 and the freebsd32 emulation, we're adding a CHERI ABI. It's sort of the first cut. It's not clear what the long-term right answer is, but that's one we know how to do. So I did a first pass at syscalls.master. The great thing is you take all the lines that say COMPAT and clear them out: we've never shipped with those versions, and we never will. Well, thank you all for coming. If you have any questions later, bring them to me.
CheriBSD is a fork of FreeBSD to support the CHERI research CPU. We have extended the kernel to provide support for CHERI memory capabilities as well as modifying applications and libraries including tcpdump, libmagic, and libz to take advantage of these capabilities for improved memory safety and compartmentalization. We have also developed custom demo applications and deployment infrastructure for our tablet demo platform. In this talk I will discuss the challenges facing a long-running, public fork of FreeBSD. The challenges I discuss will include keeping up with CURRENT, our migration from Perforce to Git, and the difficulty--and value--of upstreaming improvements. I will also cover our internal and external release process and the products we produce. CheriBSD targets a research environment, but lessons learned will apply to many environments building products or services on customized versions of FreeBSD.
10.5446/18675 (DOI)
RoCE. RoCE is RDMA over Converged Ethernet. So if I already have an Ethernet deployment and I still want to use RDMA, I want to use InfiniBand, I want to use the perks of RDMA on it, then how do I do it? I use RoCE, okay? We're going to see the flow of RoCE, we're going to understand how we use it. And the last one, an iSER introduction. iSER is actually a storage protocol, okay? It's iSCSI over RDMA. So in here we're going to see how everything we talked about so far is reflected, how we use it, and what it gives us. So let's start. We're going to talk about Ethernet versus InfiniBand. I'm not going to talk much about Ethernet, just going to mention it, as I'm sure all of you are very much familiar with it and know much more about it than me. So we're going to focus more on InfiniBand, and we're going to talk about its key components, and we'll see how we get the whole picture together, okay? We're going to go step by step till we understand how we work with it and what it gives us. So InfiniBand, as I said, is a network architecture; it is an open standard, okay? So it's not owned by anyone. InfiniBand has low latency, under one microsecond, and has high bandwidth, up to 100 gigabits per second per port. InfiniBand has low CPU overhead, okay? And RDMA, the thing that I talked about: this is also the capability that gives us the low CPU overhead. And it has fabric consolidation and low energy usage. InfiniBand can consolidate networking and storage data over a single fabric and one subnet, which significantly lowers the overall power and management overhead that is required. So in fact, I can actually have a fabric of like 10K nodes, okay, in one subnet, and the routing process is also determined in layer 2, okay, between all of these 10K nodes. So in one sentence, we can say InfiniBand is a high-speed, low-CPU-overhead, highly efficient server and storage interconnect technology. So a little bit about InfiniBand layers. We have our end node, the physical layer, the link layer, we have the network layer, the transport layer, and of course the ULPs, the upper layer protocols, and the applications. In between, we also have the switches and the routers. A quick look at an InfiniBand packet. We have here the LRH, which is a local route header, okay? We must have an LRH on each InfiniBand packet. It's addressed with a LID, okay, and the next one is a GRH, which is a global route header, okay? We address it with a GID, and the GID's representation is like IPv6: it's 128 bits. Okay? Then we have the base transport header, another header where we keep information about our data, about the InfiniBand payload that we're sending; then the extension header, again, more information about the data, about the sort of send that I'm using; we'll get to it. We have here immediate data, if you want to attach certain data, and the message payload, of course, and at the end, we have our checksums. Okay? So this is an InfiniBand packet. Now, just a quick overview of the driver and how it works. We have our InfiniBand link layer, okay? We refer to InfiniBand as IB also. So we have our InfiniBand link layer.
Above that, we have our IB vendor drivers, okay, with our ports and the traffic that goes in and out, and above the vendors, we have our IB core. So let's go through a quick flow of how it works. I have my application; the application sits above the IB core, okay? It chooses the vendor that it wants to work with, okay, and then it actually sends requests to the IB core. It does that using the API of the IB core. Then the IB core gets these requests, and according to the vendor that the application chose, it actually calls the right callback in the right vendor driver; then the vendor driver uses the right function, which goes down to firmware and to the link layer, and so on. Okay? So a few more facts about InfiniBand. InfiniBand has a subnet manager. A subnet manager is actually kind of like the brain of InfiniBand, okay? The subnet manager is the brain behind InfiniBand, the most important entity, responsible for configuring and managing an InfiniBand subnet. So like I said before, we have all the routing process established in layer 2; this works because the subnet manager actually does it. Okay, so what it does is it builds the forwarding tables, like we said, and it assigns each port in an end node a local identifier, a LID, okay? A LID is unique in a subnet; each port can get more than one LID, it can get a range of LIDs, and in a switch, all of the ports have the same LID, just so you know. And one SM is active at every given moment. So kind of like we all have one brain; okay, most of us have one brain, and if we had two, then it would be messy. So InfiniBand also has one brain: one SM is active at every given moment. We have a standby, so whenever the SM stops functioning, another SM becomes active and takes over. A little bit about InfiniBand resources. So I'll start with the QP, the queue pair. The QP is the actual object that transfers data. It has independent send and receive queues. So we can see here the send queue and the receive queue. So whenever I have a send task that I want to give to the hardware, I put it on the send queue, and whenever I want to receive packets, to receive information, I use the receive queue for it. So inside the QP, I have a send and a receive queue. The next thing is a completion queue. A completion queue, also referred to as a CQ, is a queue that holds information regarding all completed tasks. So whenever the hardware finishes a task that I gave it, it actually lets me know on the completion queue. Okay? The hardware posts a new completion on the completion queue, and I poll it, and then I can know if the task that I gave the hardware was successful, and if not, what was the error, what was wrong with it. Also we have our memory region. A memory region is a virtually contiguous memory block that was registered. What do I mean by registered? It was prepared and actually pinned to memory, so it won't be swapped out. Every memory region has its access permissions, so I can choose if I want to let remote nodes write or read my memory, and I give myself permissions as well: whether I can only read from this memory or whether I can also write to this memory. So I actually have permissions. I can use the same buffer, by the way, and register it multiple times, each time with different permissions. And also, for each buffer that was registered, we get two keys.
We get an L-key and an R-key. An L-key is a local key, so whenever I want to reach my own memory, I need to use the L-key in order to do that. And whenever a remote node wants to reach my memory, it has to use a remote key, an R-key, that I gave it. So in order for a remote node to do something, to read or write data to or from my memory region, it needs to use the R-key, and we'll talk about this a bit later when we talk about RDMA. Then you'll see when we use this R-key. And if you see here, we have these blocks that are already written with data, and these are on the send queue; and what we get on the receive queue, we put in blank memory blocks, so we fill them with the data that we got. The last resource I want to talk about is the address handle. An address handle actually describes the path from a local to a remote port. The routing is done based on the information in the address handle. So whenever I want to connect to a remote node, I need to know the path to it, the route, and so I have this address handle in which I keep this information. Now, I can save this address handle in my QP context, or I can save it on the task. We will see why and when each is done. So these are our resources that we will use. Now we also have transport types. There are four major transport types in InfiniBand. First is Unreliable Datagram, UD. A UD QP can receive and send messages to or from any other UD QP. Reliability is not guaranteed, and each message is limited to one packet. So this is kind of like UDP, but at a much lower level. The second one is RC, Reliable Connected. An RC QP is connected to a single RC QP. So reliability is guaranteed, and it supports operations that need acknowledgement. So with an RC QP we have integrity, we have reliability, and we are promised that all our packets have arrived, which is kind of like TCP, at a much lower level. Then we also have DC. This is actually kind of a combination between an RC and a UD. It's a new transport type. It supports all of the features provided by RC, including hardware reliability, while allowing a process to communicate with any remote process with just one DC QP. So it's a little bit UD and a little bit RC. So if I have this big subnet, like we said, with 10K nodes, or with a lot of nodes, and I want one node to communicate with all the other nodes, then that one node needs a lot of QPs, which means a lot of connection state that it needs to remember. So that's why we have this DC QP. Each time, I can connect to one QP and use it like an RC QP for the reliability, then connect to a new QP and use it also like an RC QP, and I don't need to remember all this connection information. And the last one I want to mention is UC. I'm just going to mention it, because we're not going to talk about it in this lecture. A UC QP is connected to a single UC QP. Reliability is not guaranteed, just so you know. But in this lecture, I will focus more on RC QPs, and on UD QPs a little bit; I will mention them. When I said before that the address handle, if you remember, sometimes I keep it on a QP context and sometimes I keep it on a task: when we're talking about an RC QP, which is connected to one QP, then I'll keep the path information, my address handle, all the information about the remote node and the route to it, on the QP, because it's connected to one QP.
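To tie the resources together in code, here's a minimal allocation sketch using the libibverbs API, the de facto verbs implementation mentioned later; error handling is omitted, and the buffer size and queue depths are arbitrary:

```c
#include <infiniband/verbs.h>
#include <stdlib.h>

struct ibv_qp *setup(struct ibv_context *ctx)
{
    struct ibv_pd *pd = ibv_alloc_pd(ctx);     /* protection domain */
    void *buf = malloc(4096);

    /* Register (pin) the buffer; the access flags are the permissions
     * discussed above.  mr->lkey and mr->rkey are the two keys. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
        IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ |
        IBV_ACCESS_REMOTE_WRITE);
    (void)mr;

    /* Completion queue: the hardware reports finished tasks here. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    /* Queue pair with its independent send and receive queues. */
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,                 /* reliable connected */
    };
    return ibv_create_qp(pd, &attr);
}
```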
Whereas when I'm dealing with a UD QP, which can connect each time to a different QP, I will keep this address handle on a certain task. Okay? Now, when I'm saying tasks, what are these tasks that I'm talking about? Let's see what we use. I have a work request, okay? So a task is actually a work request, a work item that the hardware should perform. So whenever I want the hardware to send things, okay, I actually post a work request of send. So where do I post it? On the send queue. So I tell the hardware I want to send packets, so I post a work request of send on my send queue. Okay? So I have two work queues, actually. A work queue is a queue that contains work requests. I have two work queues: I have a send queue, like we saw on the QP, and I have a receive queue. Okay? And every work request that I have is considered outstanding until the hardware posts a work completion on my completion queue. So when the hardware is done with the task that I gave it, it posts a work completion. Okay? And then it tells me, like I said before, in the completion queue, if the task was okay, if the work was accomplished, and if not, what was the error, what was wrong, and so on. So I just want to show you a few of these opcodes of work requests: I have RDMA write here, I have a work request of send, I have send with immediate, RDMA read, and a lot of others. If you work with InfiniBand, then you'll probably get to know a lot of them. I want to go back a little bit to my chart from before. We said before that the application that sits above the IB core actually sends it requests. So what are these requests that the application sends? It actually uses verbs, okay, in order to communicate with the IB core. So what are verbs? Verbs are an abstract description of the functionality that is provided for applications. It is not an API, and there are several implementations of it. Now, I should have said before: everything that I'm talking about with InfiniBand, you can find it, and of course more, in the InfiniBand spec. Now, in the InfiniBand spec, if you read it, you see that in terms of how I allocate resources or how I send data, there is no API there. It says, okay, you need to allocate a QP and you need to allocate a completion queue, but it doesn't say how to do it. So over the years, a more firm API was established, and this is the API that I'm going to show you that we use. But just so you know, the spec actually describes verbs: what we should do. Verbs can be divided into two major groups. We have the control path, which manages the resources (if I want to allocate a QP, like we said, allocate a CQ, and so on), and we have the data path, which uses the resources to send and receive data. So if I want to send something, I need to post a work request, so I'm using the data path, okay? These are examples of verbs: I have create AH, create QP, modify QP if I want, and I have the post send and the post receive. So now that we know more or less what InfiniBand is, I want to talk about RDMA, which, as we said before, is the most important capability of InfiniBand. So RDMA, as we said, is Remote Direct Memory Access. So what does it mean? Like we said, it's one of the key capabilities of InfiniBand: it actually enables data transfer between servers, and between server and storage, with minimal involvement of the host CPU in the data path. So like my picture over here, it's actually like picking the remote side's brain.
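As an aside, before we get into RDMA proper: posting one of those work requests and then polling for its work completion looks roughly like this with the verbs API. A sketch only; qp, cq, mr, and buf are assumed to come from a setup like the one above:

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

int send_and_wait(struct ibv_qp *qp, struct ibv_cq *cq,
                  struct ibv_mr *mr, void *buf, uint32_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = len,
        .lkey   = mr->lkey,              /* local key for our memory */
    };
    struct ibv_send_wr wr = {
        .opcode     = IBV_WR_SEND,       /* or IBV_WR_RDMA_WRITE/_READ */
        .sg_list    = &sge,
        .num_sge    = 1,
        .send_flags = IBV_SEND_SIGNALED, /* ask for a work completion */
    };
    /* For the RDMA opcodes you would also fill in the remote side's
     * address and R-key: wr.wr.rdma.remote_addr / wr.wr.rdma.rkey. */
    struct ibv_send_wr *bad_wr;
    if (ibv_post_send(qp, &wr, &bad_wr) != 0)
        return -1;

    /* The request is outstanding until the hardware posts a completion. */
    struct ibv_wc wc;
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ;                                /* busy-poll for simplicity */
    if (wc.status != IBV_WC_SUCCESS) {
        fprintf(stderr, "failed: %s\n", ibv_wc_status_str(wc.status));
        return -1;
    }
    return 0;
}
```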
So I'm picking its brain: I can write things, I can read from it, but it's without its awareness; it's not aware of what I am doing. So yeah, I'm above the law, I can do it. But it lets me do it, as we'll see. So in order to understand how RDMA works, I want to go over the traditional model for one second. We're talking here about send. It's just the classic model: data is read on the local side, it's sent over the wire as a message, and the remote side specifies where the message will be saved. And we can see over here, we have the requester and we have the responder. The responder must post a receive request over here. Each message must consume one receive request that the responder posts, and the receive request actually says where the data is going to be saved. And then the requester posts a send request. Only the data passes on the wire. The responder polls the CQ to see if it has a work completion, like we said before, to see if it went fine or not. And it sends an acknowledgement if we're talking about a reliable transport type. Now, in RDMA, the RDMA data transfer model is used for RDMA read or write. The local side can write data, like we said, directly to the remote side's memory. Also, the local side can read data directly from the remote side's memory. The remote side isn't aware of any activity: no CPU involvement on the remote side. So how is it done? We have again the requester and the responder. The requester posts a send request. Data and remote memory attributes are sent. So like we said before, if I want to read or write data to or from the remote node, first of all, the remote node must give me access permission to do it. So when it registered its memory, it must have set that I can do a remote read or a remote write to it. And moreover, it must send me an R-key. Now, when it sends me the R-key, that, of course, is a regular send, but it can send me one R-key and one address, one time, so I'll know where to reach, and then I can do as many RDMA reads and writes as I want. So there is still just that one regular send involving the CPU. And then again, you can see the responder is totally passive; it doesn't do anything. I'm getting an acknowledgement, okay? The requester polls the completion queue to get the acknowledgement. By the way, if I use an RDMA read, then actually the data that I read is my acknowledgement, because I know I did it if I have the data that I read. And the responder, again, doesn't do a thing. A little bit of performance that we have with InfiniBand. This is, as you can see, at 100 gigabits. So we get to line rate, okay? InfiniBand bandwidth at about 1K message size. And we also have InfiniBand latency: like I said before, we get less than one microsecond. Okay. So who knows this one? RoCE. So like I said before: now I know what InfiniBand is, I know what RDMA is, I want to use it, but I already have an Ethernet network and an Ethernet deployment, and I don't want to change all of my network just in order to use RDMA. So for that, we have RoCE. RoCE is RDMA over Converged Ethernet, which means I can use RDMA, I can use InfiniBand and its great performance, but I will use it over Ethernet. So we'll go back to our lovely chart. We used to have the InfiniBand link layer. Now, hop, we have the Ethernet link layer instead. And if we have the Ethernet link layer, we also need to load the EN vendor drivers. Okay? Now, what do we need the EN for? The Ethernet side has two major jobs with RoCE. First of all, configuration of the port.
I don't have a subnet manager anymore, so I need someone to configure my port, give it a MAC address, expose it, and manage it from that moment on. And communication with the network stack for routing purposes: I need to communicate with the network stack. So I actually need to load my EN module. So when running with RoCE, the EN module has to be loaded as well. Okay? This is the fact that you need to remember. Now, the RoCE packet. We saw before an InfiniBand packet, which is this packet; I'm not going to go over it again. And now we have the RoCE version 1 packet. And you can see this is the IB packet without the LRH, and the Ethernet header is actually wrapping the InfiniBand packet. Okay? So now, routing is done by the Ethernet header and not by the LRH. So here is the IB packet, and here is the Ethernet header, and a checksum at the end. Just a quick mention of RoCE versions. The version that is implemented in FreeBSD upstream is version 1 of RoCE, okay? This one. But we are working on version 2, which will be implemented soon and will be pushed upstream in the near future. You can see that in version 2, I also have an IP header, which means that version 2 can be routable. For now, the version which is in FreeBSD is non-routable. Okay? If I do my job quickly, then you'll get it soon. If not, then maybe not, but I'll try my best. Now, a little bit of the RoCE flow. I'm opening a new QP, okay? It can be either, like we said, an RC QP or a UD QP. The first thing I need to do is actually connect the QP to another QP. Now, I do it using RDMA-CM. So communication should be established between the connected QPs. In order to connect the QPs, we need to exchange information. We have two ways to do so. We can do it out of band, for example, with sockets or with a management port, or we can do it by using the communication manager, the CM. So actually, if we have a requester and a responder, then the requester sends a CM request, and the responder answers with its information. And it's actually a few steps in this process, and it's represented with events. We get an address-resolved event and a route-resolved event, and each time we get an event, the process advances, until we get the established event, which means the connection has been established. Now, the required fields are the QP number of the other side; the GID addresses (we said the GRH is addressed with a GID, so I need to know my source GID and the destination GID); and my source MAC and the destination MAC. So actually, we give the application the destination IP, and after the connection process is done, we have all the information that we need to put in the address handle, the route to the other side, in order to send our packets. I just want to show you an example. These are the events: this is address-resolved, address error if we have an error, route-resolved, until we get the established one over here. These are a few of the functions: rdma_resolve_addr, rdma_resolve_ip, addr_resolve, and of course we have a lot more. All of these calls, by the way, are now in sys/ofed/drivers/infiniband/core. So if you want to go check it out, you can, because it's all in upstream. So I mentioned GIDs before, and I just want to explain what they are very, very shortly. The driver actually manages a table of configured GIDs for each port. The hardware reads the GIDs from the table and uses them when sending a RoCE packet.
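Before we continue with GIDs, here's roughly what the client side of that connection sequence looks like with the librdmacm API. A condensed sketch: real code checks every return value and the type of each event, and dst is the destination IP address the application supplies:

```c
#include <rdma/rdma_cma.h>

static void wait_for(struct rdma_event_channel *ch,
                     enum rdma_cm_event_type expected)
{
    struct rdma_cm_event *ev;
    rdma_get_cm_event(ch, &ev);   /* blocks for the next event */
    /* Real code verifies ev->event == expected. */
    (void)expected;
    rdma_ack_cm_event(ev);
}

struct rdma_cm_id *connect_to(struct sockaddr *dst)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    rdma_create_id(ch, &id, NULL, RDMA_PS_TCP);

    rdma_resolve_addr(id, NULL, dst, 2000 /* ms */);
    wait_for(ch, RDMA_CM_EVENT_ADDR_RESOLVED);

    rdma_resolve_route(id, 2000);
    wait_for(ch, RDMA_CM_EVENT_ROUTE_RESOLVED);

    /* QP numbers, GIDs, and MACs are exchanged via the CM messages. */
    struct rdma_conn_param param = { .retry_count = 7 };
    rdma_connect(id, &param);
    wait_for(ch, RDMA_CM_EVENT_ESTABLISHED);

    return id;   /* path info is now available for the address handle */
}
```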
We said in a RoCE packet we have a GRH header, okay? And we need to address it with GIDs. So the driver actually maintains a table, and whenever the hardware needs to build the packet, it uses the GIDs over there and puts them in the packet. Now, the version that we have now uses IP-based GIDs, okay? So the GIDs are based on our IPs. So whenever we configure a port with an IP, IPv4 or IPv6, we need to convert it into a GID; the driver puts it in the table, and then the hardware will know to put it in the packet when it sends it. We continue with our flow: saving routing information, okay? We got these two QPs connected. Now we need to save the information. We do it like we said before: with an RC QP, we save the information on the QP, in the QP context, and with a UD QP, we save the information on the work request, because each time I can send data to another QP. With a DC QP, by the way, we also save it on the QP each time, because each time we connect, it behaves like an RC QP. Then we send the data. We chose before if we want to use an opcode of RDMA read or RDMA write, or if we just want to do a regular send, and we put it in the right places for the hardware to take for the headers in the packet; then we're sending the data according to the opcode that I chose, and of course, we put the data to be sent in the memory as well. And then the hardware actually builds the packet. It takes all the information that I put in the right fields, builds the packet, and sends it to the other side. The other side receives the RoCE packet, and according to what was defined earlier, what version of RoCE I work with and what RoCE is, the hardware knows how to read the packet and how to read its headers, and takes the data that it wants from it; or if it's RDMA, the other side doesn't do anything, of course, like we said before. And we have a connection and a transfer of data, so these two kids can talk now. That's it. Now, one thing I want to say also about RoCE: when we use RoCE, we need to do it on a lossless Ethernet. In order to use RoCE, we need lossless Ethernet. Of course, and I'm sure you all know this better than me, you can do it with pause frames inside a subnet, or we can also do it with ECN across subnets. So we have ways to achieve lossless Ethernet; we just need to remember that when we use RoCE, we've got to make sure that we do it on a lossless Ethernet. A little bit of performance of RoCE. Okay, well, this is RoCE bandwidth. You can see that we also get to line rate: 40 gigabits, okay, here. And if we look at the TCP bandwidth, you can see that this is the message size, so you can see that here we get there at around 1K, and here at around 2K. And this is RoCE latency versus TCP latency: like we said, here we get under one microsecond, and here it's about, we reach about 10. And this is the TCP, of course. Okay, so we got to the last subject, and I did it quickly. So we know what InfiniBand, RDMA, and RoCE are. Now, like I said, I'm going to go through iSER, which is a storage protocol, just to understand a little bit how we use it and what it gives us. So we have our chart as before: we have the InfiniBand core and the vendors, and now we also have Ethernet or InfiniBand. I want to start with SCSI. SCSI is a storage protocol, like we said before. It's actually a set of standards for connecting and transferring data between a computer and its devices.
It defines protocols and it defines commands, but SCSI is actually standalone on the machine: it doesn't have connection management. So it serves me as a storage protocol on my machine, but if I want to connect to other machines and exchange information with them, then I cannot use SCSI alone. For that, I need a management layer. So for that, I have SRP. SRP runs over InfiniBand, over the IB core. And I also have iSCSI. iSCSI is another management layer, which runs over TCP/IP and over Ethernet. So when I use SCSI, I can actually choose if I want to run it over InfiniBand and use SRP, or if I want to run it over Ethernet and use iSCSI. Now I have here iSER in between. iSER actually extends the use of iSCSI to also use RDMA. So when I'm using iSER, I'm actually using the management layer of iSCSI, so I run over Ethernet, but I also use RDMA, actually the IB core. So it's kind of like RoCE, actually. So iSER actually uses RoCE. Now, the alpha version of iSER is currently under review, and it will also be pushed to FreeBSD. It adds iSER itself, and it also adds a little bit of code to iSCSI in order to have the choice of whether to run with TCP or with iSER. A little bit about how iSER works. I have the initiator and I have the target. Now, whenever the initiator wants to read or write data to or from the target, what it does is actually send an R-key. Okay? And then the target is the one that uses RDMA read and RDMA write to the initiator. So if I'm the initiator and I want to write data to the target, then I send the R-key and tell it what I want to do, and then the target actually uses RDMA read and reads the data into its memory without the involvement of the initiator. And again, if I'm the initiator and I want to read data from the target, then I send the R-key, and the target actually uses RDMA write and writes the data to the initiator. Okay? And at the end, it actually sends a finish, saying that it was done. And a little bit of performance of iSER. So this is the initiator, in IOPS. We have here 16 connections. And you can see we increase the cores up to 16. And this is iSER: with iSER, we get to 2000K, while we can see that with iSCSI, we're just about here. These, by the way, are the cores on the target. So in here, it's only 2 cores on the target, but in here, we use 16 cores. That was message rate, and this is one connection; this is bandwidth. So we can see that now the colors are different. Now the red is iSER; it's not my fault. So iSER also can reach line rate as the block size increases, and you can see that with iSCSI, we get to about half of it. Okay? Okay. I'm done. Any questions? Yes. Several slides back, you were showing the stack on the source side. The what? You were showing the stack on the source side. Is there any way to go back to it? Yes, right here. In this case, it shows that iSCSI and iSER are both part of the same stack. Well, it's not really arranged as stacks. I just wanted to show how each of them uses the other one; I didn't try to build stacks here. So that wasn't my intent. Okay, because you had them in the chart showing them as being, you know, together... Yes. Are there any plans to do SRP on FreeBSD? SRP on FreeBSD? I'm not sure. Are there any plans to do SRP on FreeBSD? Actually, we're working on iSER.
We're working on iSER, as you know. For now, I guess there are no plans, but if you want, you have my email; I'll show it again. And if there's a request, you can send an email and we can think about it, I guess. Yeah. You said that some of this stuff, version 1, is already there; where is that? The version 1 part of... of RoCE? It's inside the OFED stack, in the InfiniBand drivers. Is it in the latest OFED stack? What? It's in the latest OFED stack? Yeah, yeah, yeah. In the latest OFED stack, you have it in the core and in the InfiniBand drivers. And your plan is to do version 2 as well? Yes, yes, yes. We're working on it these days; when I'm not here, that's what I'm doing. Yes. Do you want to compare this to iWARP, which seems to be doing something similar? I'm not very familiar with iWARP, so I can't answer this question, but maybe there's someone here that did them both and can answer; I'm not sure which is better. I like RoCE better, because that's what I do, but I'm not sure. Sorry. So, I meant maybe someone who ran iWARP is in the room. Oh, yeah, you can ask him about iWARP. Ah, that's what you wanted to say? Yeah. Ah, okay. Good. We all want to hear it; let's talk it through. So, with these different hardware vendors making different things: iWARP runs on top of TCP, so you get all the routability, and that's something good; it works in today's chaotic internet. But if you don't need that, then... sorry, I'll leave it at that. Thank you. Does the hardware that you run on have to have support for RoCE, or could it just be in the driver? No, you need to have support in hardware, of course. Which NICs, can you say? Hardware since ConnectX-3 supports RoCE; RoCE version 2 is supported from ConnectX-3 Pro. Yeah, you need to have support in hardware. You need support in the driver, but the hardware from ConnectX-3 and up, which means ConnectX-3, ConnectX-3 Pro, and our new 100-gig ConnectX-4, supports RoCE out of the box. Thank you. I know Chelsio, among other vendors, have written such support; I didn't know if it was something you could add after a product already existed or anything. I'm not sure; you know, I'm not talking for Chelsio, I'm not sure if they have it. I'll check it. Okay. We don't do RoCE. No, I don't. Okay. So that's it; I don't know about the other people. So... all the people that I told not to ask questions and to come later, you can come now. Yeah. Okay, that's it. And thank you.
Introducing a new way to enable high-speed data transfers over an Ethernet network with minimal CPU involvement RDMA (Remote Direct Memory Access) is growing in popularity in Linux and Windows systems as a way to transfer large amounts of data with low latency and minimal involvement from the CPU. However RDMA InfiniBand drivers in FreeBSD were not updated, requiring users to create or port their own implementation of RDMA, and RDMA over Ethernet was not available in FreeBSD. This talk will describe how RDMA works and review the new addition of RoCE (RDMA over Converged Ethernet) network drivers in FreeBSD, allowing easier implementation of rapid data transfers with low CPU utilization over Ethernet and InfiniBand. This also enables the use of iSCSI over RDMA via the iSER (iSCSI Extensions for RDMA) protocol. One of InfiniBand’s valuable capabilities is its support for RDMA (Remote Direct Memory Access) operations across a network, which enable rapid data transfer without involvement of the host CPU in the data path, and data placement to the responder memory without requiring its CPU awareness. RoCE (RDMA over Converged Ethernet) is a standard for RDMA over Ethernet. It provides true RDMA semantics for Ethernet and allows InfiniBand transport applications to work over an Ethernet network. FreeBSD is frequently used for storage purposes and RDMA capability has a high potential of improving performance in such storage applications. A good example for that is iSER (iSCSI Extensions for RDMA), a module being developed nowadays for FreeBSD, which enables the use of iSCSI over RoCE. The main idea of this talk is a short overview of RDMA – Its principles, key components and its main advantages. Additionally, it will cover the use of RoCE - implementation architecture, obstacles we overcame in the development, and a quick browse of RoCE’s different capabilities and milestones.
10.5446/18670 (DOI)
I don't think Google left the light in the error message. I'm going to go ahead and get started. Stay here if you want to hear about FreeBSD scale-out operations. Just a quick shout-out to the other Limelight folks here. I'm the guy at the top, Kevin Bowling. Sean is here, a source committer. And Jason is back there, and Chris is back there as well, in various roles at Limelight on the engineering side. And Johannes is a contractor for us doing some cool stuff with stats and the Linux ports. That's more or less the totality of our BSD effort; we've got a couple other people, and I'll touch on that in a little bit. So just an introduction to what Limelight is. We are a CDN, and this is a cute graphic our marketing folks came up with. Basically what we do is put servers close to users. These are in data centers that are rich with eyeball networks and backhaul. We run our own fiber backbone. This actually differentiates us from most other CDNs, which are generally going over internet transit or some type of carriers; if they're putting, for instance, their appliance in an ISP location, they have to backhaul over the ISP's network. So this kind of lets us get over the turbulence of the internet. We can also accelerate non-cacheable content via our backbone. We do have some other services aside from content delivery. We do video, so we've got a pretty comprehensive system around that; it's basically like a private YouTube that you can drop into a site. A lot of local news channels, for instance, use this. Let's see. We've also got object storage. This is similar to S3. It's much more targeted at being an origin for our caching service, but people do use it as generic storage, basically S3-type object storage. We've got DDoS attack mitigation that can be used either with our content delivery products or as a network defense, as long as we can take control of the front-end IPs. As far as numbers go, we're somewhere north of 10 terabits of egress at this point of actual bandwidth, and that's peering, transit, paid peering. So we're pretty big in the CDN market; we're generally between one and three depending. Well, I don't think we've ever been one, but number two or three depending on the time of year. And we have somewhere north of 100 data centers. Again, these are just PoPs in large metro areas with lots of fiber and hopefully lots of eyeball networks. So a PoP looks pretty simple; there's not a lot going on inside of them in terms of the equipment. We've got DWDM gear; this basically runs a local fiber loop, because generally we don't go into just one data center in a metro area, we'll have two or three. The DWDM gear lets us, over a single pair of fibers, cram like ten 10-gigabit lines. That creates a loop between the data centers; we basically treat all of those data centers as one point of presence. And we do get a little bit of redundancy out of that, but that's how that works. At the actual data centers, we have a pair of generally the largest routers you can get from somebody like Brocade, with a full route table. This is what our peers are coming into, and our transit. Behind that, we'll have a couple or more large chassis switches; these look just like the routers, half to three-quarters of a rack, with tons and tons of 10-gig ports going out to the systems. Or we're pulling 40-gig off to a spine network; generally we're using Arista 40-gig switches here, and those will go to top-of-rack switches. There are pros and cons to both approaches.
Price usually dictates which we do, as well as the size of the PoP. Then we've got a ton of servers that look just like this. A lot of people use Supermicro; we're in that camp. We generally throw one CPU into these. You know, this is good for FreeBSD because we don't have NUMA problems; there's just a single NUMA node. We're using all SSDs at this point on these edge boxes. We've used some Samsung, and I think we've evaluated Micron as well. So all of those bays will generally be 480s at this point. We're looking at going up to terabyte-class SSDs, because that affects our cache retention time, which lets us get faster throughput for long-tail content the more space we have. On the back of this thing, it's actually two servers in the 2U. The reason we do this is we get four extra drives in the 2U versus 1U servers. It does cause some problems with asset management; we've mostly worked that out, but for instance, if you pull one of those nodes and put a new one in, how do you handle that? It's a pain. But it's worth it for the four extra drives. On the back, at this point we're using Intel 10-gig fiber Ethernet; that drops into this little guy right here. We're trying to work with Chelsio right now and see if we can get a Chelsio board to go into this thing, because if you don't populate the second CPU socket on these Supermicro boards, you don't get to use these, unfortunately. Sure. I don't track that; nobody on my team does either, we work a little bit higher level than that, but I would assume so. We're trying to get more and more efficient, so that will be part of the effort, but at this point it's purely performance driven. We can do so much more with the SSDs. It does, but SSDs have dropped to the point where they're big enough and cheap enough that it doesn't matter. We're in colos; we only have a couple of our own data centers, so we don't care too much about that as long as the data center does a good job. So again, the point of this talk, what actually motivated me to do it: a lot of people talk about embedded use, and there are a lot of appliance vendors talking about FreeBSD, but I haven't seen a lot of people talking about large-scale installations, and there are a few of those out there. So I want to show you what we do, and hopefully people can learn or be motivated to come and talk about their own stuff. So the main difference between an ops type of workload and an appliance workload is that the systems are very fluid. These things are changing quite regularly in terms of software and in terms of configuration. We're pushing configuration several times a day, either for customer turn-ups or to test new packages or whatever the case may be. And this is very common; this is all of the hot stuff you see at startups and whatnot. This is like large websites, API-centric companies, and service providers; they're all in this category of ops, I would say. And with that, the workload is basically internet facing. We're not like a storage appliance that has to have 100% availability because a ton of servers are hanging off of it; we've got lots of cheap nodes, and we can kind of deal with failure in different ways. So this is more or less the about-me; I think it's kind of important before we get to the other slides. I was a Linux guy for 10-plus years and very deep into that culture. And although I was doing that professionally, I kind of played around with other operating systems.
I ran m0n0wall when I was, you know, still in high school and that was a thing, switched to pfSense when that started gaining traction, and I would play around with other OSes just for fun. I'm kind of curious about the design tradeoffs and why people do things. I also like old hardware; that kind of played a role with those ones at the end. So I started at Limelight Networks, and I was intrigued by the BSD edge, because this is like our bread and butter. There's over 10,000 machines, and there's not a lot of people doing anything to make that happen. I was curious, because on the Linux side, either at Limelight or other companies I've been at, there's a ton of people per whatever measurement you want to use, per X number of servers. At Limelight that wasn't the case; there was maybe a handful of people really involved in the design and implementation of the CDN. That kind of piqued my interest and got me going on this stuff. And when I started digging, what I found was that this BSD software and mindset were really responsible for that, and that sucked me in. I'll try to explain more of that as I talk about some of the tools we use, and hopefully that makes more sense. But one motif to keep in the back of your mind while I'm doing this: observability trumps everything else. This is kind of stolen, I think, from Brendan Gregg. He meant it, I think, in the context of tracing and figuring out how software works, but I actually think it's even deeper than that. We were talking last night about how BSD pulls you into the source tree; you, for instance, know at least what your compiler is and what it's calling out to in terms of other utilities. In the base system, you know what's part of your distribution; it's not just this substrate that you're trying to fire up JVMs on top of and be done with it. You actually get involved in your operating system. So I'll dive into some of our tool choices. These are pretty airy slides, so feel free to interrupt me. We use Zabbix. We're generally happy with it. It was somewhat hard to scale, because it uses a relational database to keep track of all these incoming values. The answer to that was Fusion-io; we run MySQL on top of Fusion-io, and it works well enough for the current workload. The key insight here, though, as an aside: I wouldn't necessarily say use Zabbix unless you're a small or medium shop; it's a little bit pushing it for what we're doing. But use an API-driven monitoring system. There's a couple out there, or more than that, but make sure that the way you're interacting with your monitoring system isn't writing config files manually. You want to be pushing configuration into it, and that should ideally be part of your configuration management toolbox; I'll get to that when I talk about Salt. Operationally, monitoring has to be part of your entry into production. If you have people putting customer-facing stuff up without monitoring, you're going to have a bad time. I mean, you're going to have problems, and there's going to be this fire drill, and then you're going to wonder why you didn't do that to begin with. This is something we've learned a few times over; I think we've gotten a little bit better at it recently. And then the other place we want to go is getting monitoring as part of our testing in QA. A lot of people write QA toolkits or what have you to run unit tests or integration tests.
But when you're doing ops, you actually need to think beyond just the piece of software. You need to think about how it's deployed and how it integrates with other microservices or databases, whatever the case may be. The answer, we think, is plugging into monitoring; that's what's going to tell you when something's wrong in production. If you can catch those errors as part of QA, then you have a nice little feedback loop. And just as part of this, don't use Nagios anymore. It's not very good; we can do better than that as an industry. So the opposite of monitoring is metrics. This is more or less time-series data coming into some type of scalable database. We have OpenTSDB in place right now. I'm not really happy with it; I was involved in trying to un-FSCK it a few times and didn't get very far. But I liked how Sean Chittenden, a Groupon guy here at BSDCan, put it: basically what you have is a metric dumping ground. We have something that's easy to put a ton of data into and not really anything to get good stuff out of it. So I think there are better answers here. One of the things we've been experimenting with is a startup called Jut; it's kind of a hybrid hosted on-site application. This guy in the back, Chris, can tell you all about it if you're interested. But it's actually pretty cool. It's a dataflow language; dataflow programming has been around for a long time, but they kind of put it right there in your face. If you've ever used Splunk, it's the next level beyond that. So for instance, here they're showing querying an asset database, and basically the question was: using these metrics, like our average response time and our kilobits per second, how can we see how our different hardware models are influencing that? In this example, this particular device is doing quite a bit better than these other devices, and somebody looking at this could make a case to say, well, we should deploy a lot of these and deprecate those, because that wins us business or whatever. So metrics is a pretty important thing for making decisions at scale. I can talk a lot more about this if anybody's interested, or I can move on. So basically what we're trying to do is feed in a ton of stats coming off a server. Our main ingest is a program called collectd; it's just a C agent with plugins. This is looking at things like your CPU usage, load average, gstat on FreeBSD, memory. And then we try to get application metrics too. This requires the application developers to get involved, but they can push up things like transactions per second or some type of percentile response time, things like that. Once we get it into one of these systems and we can query it; this is actually the bare-bones OpenTSDB interface, there are some better ones, Grafana. But basically what you then do is try to correlate things. This is actually a brilliant example: can I correlate server model to response time? But maybe I want to look at backbone saturation versus response time, or swap-in versus response time, things like that. When you have the data, you can start asking questions. And with it in a scalable database, you can ask them post facto; you don't lose that after an incident. You can go back and say, why did we do something wrong or imperfectly there? Yes? You mentioned Splunk and ELK while you had that slide up there; my company is just starting to look at that.
Could you touch on what you think of them? Sure. So I said it's not quite metrics, because basically both of these things are taking in log data; for instance, you're pushing in syslog or app logs. Then they have indexers that can put that into an efficient structure so you can query it and roll it up into different things. A lot of times you can turn that back into metrics; for instance, we can use Splunk to get metrics off of an access log or something. ELK is more or less equivalent to Splunk, just open source. The other thing you can do here is just query: if you are looking for, for instance, a panic or something that's coming off of syslog, you can go into Splunk and try to make inductions based off of that, try to correlate things, kernel version or things like that. Does that help? So it's kind of, more or less, like a Nagios replacement? Not for alerting, but in terms of getting metrics off the syslog. These two are more textual. It's really about how you deal with logs at scale. A person can't go view the syslog output of 10,000 servers; it's just overwhelming. So what you try to do is get it all into here and then look for anomalies, or create canned searches that know certain bad conditions, things like that. You can use that to then feed an alarm into your monitoring system, but by itself it's very freeform; it's like a search index for text. I'll go ahead and move on then. So this is something we've invested a lot of work into in the past year. We were a CFEngine 2 shop, and then we had some Chef through acquisitions. But we did kind of a bake-off, we looked at what was out there and what would work for our implementation, and we found Salt. We've been pretty pleased with this decision. The key insight with Salt is that you have configuration management built on top of an orchestration bus. Rather than running your CM system on a scheduler or a cron, you actually have agents permanently running on the systems, and they're always connected to these master systems. So this is kind of interesting: you can react to different events. For instance, when CM runs on one system and something changes, that can push something over the bus and make something else happen, say, add a host to a load balancer, in real time. You don't have to wait for scheduled runs. I gave a talk at SaltConf where we go really deep into how we deal with changes to the CM system itself. We basically have a workflow where we have a steady-state CM, and when somebody wants to change that policy, we spin up a new Salt master in a container, let them point their machines at it, and verify it in a sandbox environment, or even in production for certain changes. When that's ready, it's accepted and promoted into the steady state. This has been pretty cool. So basically what you're trying to do with configuration management, if this is new to you, is move system state from something like shell scripts or interactive input into declarations. You want to describe what a machine is supposed to do, rather than step by step how it is to do it, and then let the system figure out what's changed, what needs to be changed, and what order it needs to happen in, to make it do a thing. So basically, policy is greater than implementation with configuration management. With Salt, or with most systems, you can do things programmatically when you need to.
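As a concrete illustration of that declarative style, here is a minimal, hypothetical sketch of the kind of state described next: one declaration that installs and runs ntpd across FreeBSD, Red Hat, and Ubuntu hosts by pulling per-platform names from a map file. The file layout and the package and service names are illustrative, not Limelight's actual states.

    {# ntpd/map.jinja (hypothetical): per-OS package and service names #}
    {% set ntpd = salt['grains.filter_by']({
        'FreeBSD': {'pkg': 'ntp', 'svc': 'ntpd'},
        'RedHat':  {'pkg': 'ntp', 'svc': 'ntpd'},
        'Debian':  {'pkg': 'ntp', 'svc': 'ntp'},
    }, grain='os_family') %}

    # ntpd/init.sls: the declaration itself, shared by all three platforms
    {% from "ntpd/map.jinja" import ntpd with context %}

    ntpd:
      pkg.installed:
        - name: {{ ntpd.pkg }}
      service.running:
        - name: {{ ntpd.svc }}
        - enable: True
        - require:
          - pkg: ntpd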
One of the key insights is that you want to build those programmatic structures up so you can then use them in your declarations, and Salt makes this really easy. This is a state that deploys network time, ntpd, and using a map file it works on our FreeBSD hosts, our Red Hat hosts, and our Ubuntu hosts. So that's what you can do with CM: you can abstract things out a little bit and make it easy to understand what a host is doing at an abstract level. The other thing we get with Salt is this orchestration bus. A kind of neat example we had recently: we ran into some weirdness in the TCP stack, where we have a customer with a very bad network that's sending out-of-order packets in the initial burst, and it's actually sending ACKs left of the window, and there's actually an RFC that none of us knew about where this is supposed to be a good thing. So we wanted to see how prevalent this was in production, to gauge the severity. We wrote a DTrace script and actually ran it on 2,000 production machines and just watched a counter for 10 minutes. And we found out it's actually very, very rare. So that helped us triage a bug from "oh wow, we'd better get a handle on this real quick" to "okay, we can take our time and figure out what's actually going on here, and how do we want to fix it." Should I pause here on Salt? Do you have any questions or comments? Yes we do. So we've got, I'm trying to think of a good example. So we sync SSH keys out to the edge; this is just one I wrote, so it's at the top of my head. To do that, the module goes and makes an LDAP query for the SSH attribute in the directory and then pumps that to the master. Then the master can use the Salt file server to push that out to our edge nodes; it's just the way we log into our systems. We've also written modules to do different services. One of them is actually this workflow, how this thing spins up containers; that's a module. A couple screenfuls at most; it's easy. I'm really pleased with Salt. Everything's pretty straightforward. The docs are a little bit hard to get started with, but once you grok it, it's pretty easy to keep going. So just really quickly, if Salt has a constant connection back to the master, what kind of load does that put on your network? Very little. Very, very little. It's ZeroMQ underneath right now, and they're actually working to make an even more optimized transport. But as far as bandwidth, there's nothing noticeable; I think the machine's been up for a while with a large client count, and it's done like 100 gigs over a couple months. How many clients do you have? So we have like 1,600 servers. I'm just trying to get a sense of the ratio. So we've got one master, just a single master right now, with a total of like 2,000 hosts on it. Then we've got a couple other pools, and that's doing fine; that's handling all the encryption and everything. You'll see the CPU spike a little bit; you don't want to skimp on hardware there, but I think you'll be all right. What was your last question? What's the master, approximately, in hardware and specs? So we went dual CPU, whatever the current generation is, and RAM actually didn't matter, but we've got like 128 gigs in there. We also did SSDs, just because we didn't need a lot of space and they're affordable for us. Did you evaluate Ansible as well? Sure. So I looked into Ansible on my own.
We didn't consider it for work. We looked into Chef, CFEngine 3, and Puppet, aside from Salt. What I saw in Ansible was a lot of the same thing, but it didn't do the bus, the thing that we really like; that's kind of a key insight for us. I think it's a great configuration management system, and it's really easy to get started; their docs are fantastic. It just seemed to me that Salt was a better fit for what we wanted to do. So if you had CFEngine 2, why was CFEngine 3 kind of discounted? We didn't see a tremendous gain from 2 to 3. What we wanted was easy templating like this; that would actually be a lot more work in CFEngine 3. And then the ease of writing custom modules: we've got a lot of people that know Python to varying degrees, so the barrier to changing both the server and the client is pretty low. Most of us that have worked on the Salt implementation have actually been contributing patches, so we've got a lot of people that have contributed patches to the Salt upstream. You just go in and do it and you're done for the day; you don't have to have a huge learning curve. I think that's a key win, actually, over some of the other systems, where they've started bifurcating: the agents are in Ruby and now you've got a Clojure server or whatever. It starts making things harder for casual development. Let's go ahead and move on. So how do we actually get FreeBSD onto our edge machines? This has changed in recent times; we're trying to get a little bit more formal with it, because we've got source committers on staff now and we're starting to do more interesting stuff. Basically, we use Git at Limelight as our version control system, and there's a semi-official GitHub mirror of the FreeBSD SVN tree. So we have two branches, head and stable, and these follow SVN head and, currently, 10-stable. What we're doing here is: we deploy 10-stable, but we develop against head, because we want to stay ahead of the curve and make sure that what we're doing is going to be fine when the next release comes along. So we take these two branches and grind them through a Jenkins job; this produces our images that actually go out to the edge. And I'll go off on a tangent here: we have this Vagrant thing, which is actually part of our Salt deployment. We're taking these images and pushing them out as Vagrant boxes, so developers can run this stuff on their laptops. The insight here is we want them to have a very low barrier to entry to writing configuration management and working with our actual production images. If you're developing against a vanilla image that might not have all of our customizations, maybe you don't run into a problem early enough and it becomes a problem, that kind of thing. So with Vagrant, we're able to get a very low barrier to entry to our very production-looking environments. And this is all a big feedback loop. Packer is a thing that we use to make those box files. It's a little bit more important in the Linux world, because there we just have these ISO images that we have to enhance with our changes to packages and config, but in FreeBSD we've got the build system, so we can do whatever we need to there. I'll go more into our source stuff in a bit. So, phase two: after I'd been at Limelight for a little over a year, what I saw was that this BSD stuff was awesome and we needed to do more of it; we needed to be deliberate about it. So we brought on Sean, and that's been awesome.
He's been helping us upstream all the things. We had a stack of patches, not a huge list, not like some of the appliance people, but enough to try to get that stuff either fixed upstream or at least reported upstream, so it could be fixed in a perhaps better way. And we're trying to get better about how we actually use the ports tree and build packages. This is an ongoing thing, but the key here is Poudriere and pkg(ng). These are really awesome; I think they're kind of the best software packaging experience that I've seen on any operating system to date. And again, this is all about being very deliberate about what we're doing. A lot of things that, up until this time, were done just because they had to get done, and now we're trying to take a look and say, okay, here's how we should do it going forward, and we'll be more efficient and better. So how did we start a source team? For instance, I found Sean on the freebsd-jobs mailing list. This is pretty low volume, but you can either post your resume there or post a req there. You can come to conferences like this and look for people that are doing stuff. And of course, if you do cool stuff visibly, generally people will come to you; we're trying to do that, and I hope we're getting better at it, but there are plenty of people using BSD that do that. So, the benefits of starting the source team: we were on FreeBSD 8 when we started, and 9 had come out, 10 had come out. And getting from 8 to 10 was actually a lot more involved than we thought; even with this small patch stack, as an operator, it was quite a bit of work. Both because there were actually bugs in the 10.0 and 10.1 releases that we've had to work through, and because we have a binary blob that we actually deploy to production: we bought a pluggable congestion control algorithm, before that was a thing in FreeBSD, that does some network magic. So we had to figure out how we could keep the interface consistent, so we could keep using that in the 10 life cycle, while we figure out what we want to keep from it and implement ourselves, where we can, as source changes. Some of the other things we've done: Sean worked on this multi-queue em driver. The em driver is for a gigabit-class Ethernet controller from Intel, and it only uses one queue. What we saw was that a lot of our machines were actually kind of stuck in the TCP path. What he found, through reading some of the hardware manuals, was that you could split this out to at least two queues on some of the chips, and now we can get two or more cores, I think two to four cores, doing that TCP output path. And this actually got us, with the two link aggs, from like 1.1 gigabits reliably to where we can now more or less max those two interfaces out. So that was a really nice thing. We also started doing some profiling with DTrace and pmcstat, and we found that we were paying a pretty hefty ipfw penalty on our outbound path, and we don't have any outbound rules. This is because, by default, even if you don't have any rules, there's an accept rule, and then you have a bunch of setup and teardown with ipfw. So Hiren added, I think it was like a two-line change, and we'll probably try to push it upstream if people want it: just a sysctl to skip the ipfw overhead on the outbound path. And we got an appreciable gain out of that as well. Sean did this PMTUD work; this was basically for when people are blocking ICMP traffic.
It's PMTUD, and it's the blackhole detection implementation. So, do you want to explain? Oh, it makes the network not stop when somebody is blocking ICMP. This was something that, I think, I don't know if it was a customer request or something that we just noticed in production, but that was a cool thing that we got knocked out. Calloutng: this was really fun, for some value of fun. The callout system was broken up through the 10.1 release. You don't actually notice this on a small fleet of systems; the panics that you'll see from it are rare enough. But when we had such a large number of machines, we could actually see machines panicking daily. We didn't actually develop the fix, but we were following along in the review, poking people, and testing the patches. So we think this is fixed in 10-stable, in what will become 10.2. That was actually quite a bit of work, just figuring it out, and again, Sean and Jason were key in doing that. We're looking into TCP customization. A lot of this will go upstream where we can, but some of it might be where we're deviating from the spec or whatever. And then we're also doing MFCs of stuff, sometimes early, or sometimes if somebody commits something to current and, for whatever reason, doesn't want to MFC it, we'll pull it back ourselves from the upstream project. So, some of the insights of working with source: we want to always develop against head. We don't want to get into the situation that other vendors have gotten into, where they're married to a release and then have to do this huge drill to get back to the current release. We want to know what's changing in head while it's changing, so we can influence it, sound the alarm, or hopefully prevent problems from happening. So this is our LL-head branch. Then we pull those changes back to our LL-stable, which is following 10-stable. When we're ready to ship this, we do an internal release engineering process; basically, this is running our build job, doing some smoke tests, and then deploying it to canary hosts. And then finally, we'll release it to our systems over a longer period of time. So again, one thing I keep reiterating here is these feedback loops. This thing called the OODA loop is kind of an interesting way to think about it: observe, orient, decide, act. We want to see what's changing; get ready, position either the people or the machines to do what they need to do; do the work; and then make sure what we did is effective. That's all we're doing on a lot of this stuff, either in operations or in development. So, where I'm at now: what I want to do is identify and support key features and the community at large. There are a couple of ways we're trying to do this. We're trying to look out and see what features in FreeBSD we want to either push an agenda on or push our resources toward implementing. We want to support the community financially, so we've made a donation to the FreeBSD Foundation. Internally, we want to show the company that we're doing good work, that our BSD people are effective, and I think we're doing a good job of that; we've got a relatively small number of people versus the footprint and the impact of these systems. We want to bring other people in the company into that fold and help them use these tools to do the same.
And how we'll do that: we want to empower service owners to do cool stuff. The base system, again, is incredibly observable. You can figure out what it's doing and how you assemble it to make whatever you're actually trying to do efficient. Poudriere and pkg are huge for developers, when you're pulling in libraries or whatever; you don't get stuck on ancient versions. You have a ton of control in figuring out how you want to manage your dependencies and your programming language environments. And then SaltStack has also been massive. This is something we want to push as self-service out to the groups that are doing product development. So those four things are where we're at today. Where I'd like to go is really around jails and iocage. This is stuff I've been playing around with on my own time, but what I think would be cool is to detach the metal OS from the userland. As a source team, we can then start evolving the stuff that's touching the hardware faster than the product guys can validate their own changes. The reason we want to do that is that we're trying to test and minimize the number of releases we have in production. When we're doing driver work or whatever, these guys don't care too much about that; they just need it to work. But we need to keep their ABI compatible and everything. So, for instance, I can envision that in the near future, in the next year or so, we'll want to start deploying 11 to production, and if we can do that without rebaking all of this userland stuff, that might be interesting for a migration period; you can support that for a couple years or whatever. ZFS is kind of instrumental to the jails thing: you want to be able to push jails around to work around hardware problems, or for data center migrations, things like that. So, I already mentioned this. This was actually a lot of work; in a corporate environment, you have to figure out how you can make people understand that a good idea is a good idea. Luckily, we had a founding engineer at the company that was able to help us make that case and get our name up here. So that's the end of my deck. And the one thing I want to say is: don't be afraid to push BSD to production in these types of roles. It's fine; a lot of people are doing it. There are plenty of resources out there, plenty of mailing lists and things that you can go to to reach out for help. And if you're doing this, I hope to see more talks about stuff like this, because I think it's an important market segment that we're kind of quiet about right now. Thank you.
In this talk, we'll look at Limelight's global CDN architecture and the practice of large scale web operations with FreeBSD. We'll investigate how FreeBSD makes these tasks easier and the strategies and tools we've developed to run our operations. We'll then look at why the engineering team chose SaltStack to further improve our operations capabilities and reduce deployment and fault handling times. Finally, we'll finish up with an overview of metrics and monitoring at scale with Zabbix and OpenTSDB. Limelight Networks is one of the "Big Three" CDNs and runs its edge using FreeBSD.
10.5446/18669 (DOI)
general Q and A if we've got time. So, the biggest thing running on BSD is our caching software, which is a C application. Then there's a bunch of maintenance software, a lot of it Perl, that does things like shipping logs off to be processed, and other configuration changes. We do have some other stuff in the works. Trying to think; most of it's C, actually, on our BSD edge. We don't have anything else going on at the moment. Sure, go ahead. Security sensing, threat modeling? That's a very deep question, I think. The first thing is we want to keep our boxes patched, and as the BSD team we do a better job of this than anybody else, because we've got this release process and we've got the Poudriere stuff, or at least ports stuff, on our older systems right now. We can audit our systems for vulns and make sure that they're patched. A lot of the monitoring stuff is, you know, syslogs; we're getting those logs shipped off of the machine, so they can't be tampered with, to the central aggregation hosts. And then a lot of it's probably human right now, but we do have tripwires and stuff in place with configuration management and with other systems. How do you manage patching your OSes today? What are you guys doing? As far as deployment, or? Just, what's the life cycle like; you know, if there's a vulnerability that comes out in the FreeBSD base system, what do you do? Yeah, so that's actually why we have this big feedback loop; we want to make that not such a big deal. We'll, for instance, pull the MFC or whatever with the vuln fix into our LL-stable branch, do a build, and then re-image all of our machines. We don't use freebsd-update or anything like that; it's all image based for this stuff. So would you consider using a ZFS boot environment to help with that? Yeah, I didn't touch on it in the presentation, but we're using UFS on these machines, because we're doing fail-in-place. If you've ever seen one of the Netflix talks, our boxes look just like that: hard drives with single partitions, no RAID or anything. When one of those drops, we just leave it dead for a long time. ZFS would be interesting for some other use cases, when we start moving some of our other applications onto FreeBSD. And I've used boot environments; I think they're awesome. They saved me on my laptop from broken updates, so they'd do the same thing in those types of roles. You're deploying IPv6 on those, is that right? Yes, all of these edge boxes are running dual stack. I don't know a lot about that; did Jason run out of here? How do they get their IPv6 address, are we using router advertisements? Oh, all right, he's stepped out. That's about all I know about IPv6, not too much, but I'd have to check the ratio. Yeah, it's per machine; it's actually a pretty small ratio for us today, but we do support it. So how are you actually shipping your logs off most of the devices? I think we use the syslog daemon on FreeBSD, and those are just going to aggregators in each of our PoPs. Great, and is it like RELP over that, or? I don't actually know what happens after that; I'm not involved with those machines. It's just a line in our config that gets the logs off of them. Yeah, I don't know off the top of my head. Anything else? Cool, thank you guys.
In this talk, we'll look at Limelight's global CDN architecture and the practice of large scale web operations with FreeBSD. We'll investigate how FreeBSD makes these tasks easier and the strategies and tools we've developed to run our operations. We'll then look at why the engineering team chose SaltStack to further improve our operations capabilities and reduce deployment and fault handling times. Finally, we'll finish up with an overview of metrics and monitoring at scale with Zabbix and OpenTSDB. Limelight Networks is one of the "Big Three" CDNs and runs its edge using FreeBSD.
10.5446/18667 (DOI)
So I think I was kind of introduced already. I'm Reyk. I've been a developer in the OpenBSD project for more than 10 years now. I mostly like to work in the networking area, and there's a lot of stuff there; I think I have one commit in X as well. Actually, for a living I'm running a company that does networking with OpenBSD. But I didn't start working on OpenBSD because of the company; it was the other way around, and so I'm in the lucky position that I can do what I like as my work, and we have a team of a few people who also work on OpenBSD. So that's the fun part of it. But of course we also have to deal with customers and requests that are not really identical to the requests you have in the open source world. So today I want to talk about httpd. It is still fairly new; it showed up about a year ago, and it's the new web server in OpenBSD. httpd has been included in OpenBSD since the 5.6 release. It was started just two weeks before the 5.6 release was finished, and we decided, let's get it in, because it's very new, so it doesn't harm. So we had it in 5.6, but then it really matured in 5.7, which is relatively new; 5.7 was released in May. You have this nice Blues Brothers theme in 5.7. So buy CDs, go online, have a look at where you can order it; that's supporting the OpenBSD project. So why do we need a web server in the base system? Actually, OpenBSD has a website, and we want to serve the OpenBSD page, which is still in a very nice 1990s HTML layout, so we need a web server to provide that page. We also have mirrors for the packages, the ISO images and so on, and some of them have actually already switched to httpd, because some of them are hosted on OpenBSD as well. Not all of the (hang on, people are tweeting me, so I have to turn this off), not all of the OpenBSD mirrors are running on OpenBSD, but actually many of them are. So we do have a need for a web server in OpenBSD. But users maybe also want to set up OpenBSD and serve their own cat page; they can just install OpenBSD, run httpd, and put their cat pictures there. This is a real page that I found just by googling for a cat GIF page; I think it's very nice. And of course we want to serve it securely, so that nobody breaks in and puts dog pictures there or something like that. We have a looking glass for bgpd in our base system. It's a simple CGI that I wrote some time ago; it's not enabled by default, but it's shipped with every OpenBSD release, just to provide a starting point. Some exchange points are running bgpd, and they usually want to provide a looking glass, to see what's going on, to do lookups and so on. And for that we need a web server, actually; otherwise we would have to move this into ports, but I like to have things in the base system in OpenBSD. I rarely use ports, except for things like the window manager and the browser, but for the networking tools it's nice to have this in the base system. So OpenBSD has a long history of web servers in the base system, and the web server changed a few times, so I'll give you a brief history. In 1998, OpenBSD imported Apache, based on the 1.3 release series, I think, or was it even 1.1? No, I think 1.3; I think Bob Beck did it. So that was OpenBSD 2.3, which is a long time ago; it's very close to the foundation of OpenBSD, which happened, I think, in '95. We're going to have our 20th birthday this year with the upcoming 5.8 release. So almost in the very beginning of OpenBSD we imported a web server.
Apache 1.3 became old, and we could not go to Apache 2, because Apache 2 has the Apache 2.0 license, which does not fit our licensing; it has some requirements that would not work in OpenBSD. So we kept using Apache 1.3, and it became a fork. Mostly Henning Brauer cleaned up the Apache 1.3 in OpenBSD; he threw out stuff like support for other platforms, VMS and so on. And we had it hardened, doing chroot by default and a few other things. So the OpenBSD Apache was quite different from the upstream version. In 2011, some people decided that nginx was the cool thing now; Apache was getting very old, and there wasn't anything else under a BSD license that was small and nicely designed, and nginx was imported at that time. And then it took a while; in March 2014, actually last year, Apache was removed and nginx became the new default web server in OpenBSD. So last year in Ljubljana in Slovenia we had a general hackathon. It really surprises me right now that it was only last August, because it feels far away. Anyway, we were at this hackathon and we looked at the code base, to replace a few things, to improve the security of our software in the base tree, to use better memory allocation and many other things; I'll give more examples later. And I looked at nginx, and it was not really easy to adopt our changes into nginx without creating a big patch for it. So somehow I got frustrated and said, well, I wrote relayd, and relayd is almost a web server, because it has some HTTP support and it does all this asynchronous I/O, which is the nice part of nginx, and relayd has been doing that for a long time as well. So I sat down one day and stripped down relayd, renamed the directory and removed everything that is not needed, like the health checking and so on, added support for serving files, and at the end of the same day I had a web server. And so it happened that we decided to use it instead of nginx. So nginx had a very short time in OpenBSD, actually. In Japan I used a title like Security Shokunin, but I think here I'm using a German term, Sicherheitshandwerkskunst, which basically means security craftsmanship in German, as I heard that you like long words. So we constantly improve our code base for security and quality. That's the nice thing in OpenBSD: it's not just a graveyard of code. Something that is in the base system is supposed to be reviewed and modified to follow common practice. If we introduce a new security API or a new allocator or something else, we go through the tree and adopt it everywhere, all the time. And then last year all these things like Heartbleed and Shellshock happened, and one response was to create the LibreSSL fork, basically. I was kind of involved in that; I was a messenger. I talked to Theo, and Theo said, yeah, sure, convince the people to do it. I did, and then it happened. So I convinced other developers; actually I'm not so active in the development of LibreSSL, but at least I had the messenger role, and I'm still alive. In reaction to that we also introduced reallocarray, for example. It's supposed to replace unsafe array allocations, where you want to allocate an array and you write, for example, a calloc-style allocation with n times m in it. These array allocations are possibly vulnerable to overflows, and reallocarray is a new function we have in OpenBSD that does the bounds checking internally.
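A minimal sketch of the difference (the grow() function is just an illustration; reallocarray(3) is the real OpenBSD interface):

    #include <stdlib.h>

    int *
    grow(int *p, size_t n)
    {
            /* The classic idiom overflows silently if n is attacker-controlled:
             *     p = realloc(p, n * sizeof(int));
             * reallocarray() checks that n * sizeof(int) cannot overflow and
             * fails with ENOMEM instead: */
            return reallocarray(p, n, sizeof(int));
    }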
So when you allocate an array, it checks that the values cannot overflow the integer, basically; it's a protection against some of the attacks that happened. And I tried to adopt this in nginx, because nginx allocates pools and arrays all over the place, and they just assume that they'll always get values that cannot overflow, that it is safe. That's something we don't really like to do; we want to explicitly check whether there is an overflow or not, and not say, oh, this cannot happen. So I tried to apply it to nginx, and the diff got big, and we couldn't get it upstream. We did not want to maintain it in OpenBSD ourselves, so I threw away the work, and we intended to use nginx as it was. So that's a tweet I wrote the day after I wrote httpd, and the same day, well, very late that day, Bob Beck and Theo de Raadt gave me some beer and said, okay, can you import this web server into OpenBSD? I was scared. I mean, it was just new, and everyone knows that, as a developer, writing a web server is what everyone does when they learn a programming language; a web server is like the hello world of networking tools. So you don't really do it. So I wrote the server, and then suddenly Theo and Bob were pushing me to get it into the tree, and the beer helped. The next day I woke up and realized that I had committed a web server. So in the beginning we had httpd, but it was not yet enabled. I worked on it for like two weeks in an insane run, basically, just me. After the hackathon I went home and I didn't do any other work, and my family didn't really see me. So I had these two weeks when I got the web server into a state where it was usable. There were still some issues, but it was usable for basic setups already. And then Theo said, okay, so we enable it. First we import stuff into the tree, but it's not linked to the build; it sits there with a Makefile, but when it's enabled, it gets compiled and becomes part of the snapshots and releases. So it showed up in 5.6, actually. We had TLS support contributed by Joel Sing, the basic file serving, and FastCGI contributed by Florian Obser, everything within those two weeks. But of course we continued working on it; this is not the current state. So, the design: simplicity is the goal. httpd is designed to be a simple and secure web server. Maybe these days everyone claims to be secure and simple, but I did some research looking at other servers, and none of them really satisfied me. So it's not that I really wanted to write my own; it's the frustration with the others. nginx, for example, started fairly small, but more features got added over time, with vendors and all these influences, so it's not simple anymore; it's quite big. And other ones are likewise not simple anymore, not light anymore. So httpd should remain simple: have the basic task of serving static files, do FastCGI for dynamic content, do proper TLS securely, and have some other core features built in: directory listing, of course, logging, basic authentication. The current code is 11K lines; that's from current, actually. Can you read this, or is it too light? I don't know. So, the different files, including the documentation, the man page, and the Makefile; it's not big, actually. The task was not to write the smallest web server possible. The design includes privilege separation and a proper structure, actually. So it's not just that I wrote a web server in one file; it's solid, actually.
So, for what it does, it's fairly small. A few features: of course it serves static files and directories. Then we support FastCGI. It is secure by design. For example, in OpenBSD we had to patch Apache, the web server, to run in a chroot by default. I'm not sure anyone else is doing this by default; in OpenBSD we've been doing it for years. The web server drops privileges and chroots to /var/www. So on OpenBSD, Shellshock is not possible by design, unless you copy a shell binary into the web server root. Accessing /etc (or "et-see," as I learned to say) files is not possible with a chrooted web server, and in most cases this is totally fine. We had this patch for nginx for some time, and for some reason it didn't get accepted upstream either, but okay, fine, we are used to that; we maintained it for nginx ourselves. But httpd is the first web server that I know of that is designed to be chrooted. You cannot turn it off. If you need to access /etc, then you can chroot to /, maybe, but it is not intended to run unchrooted or something like that. And it's doing more than chroot; it's doing privilege separation. I will show this later. TLS is there, of course, specifically for LibreSSL. You might be able to compile it with OpenSSL, but some of the API extensions that we have in LibreSSL are used by httpd; httpd is really the reference implementation for our TLS library. I'll talk about this later. Virtual servers, of course, and reconfiguration on the fly, so you don't have to kill and restart; you can just reload the configuration while keeping it running. Logging via syslog or files, of course; you don't have to buy a pro version to do syslog logging, it's integrated. You have some basic rules to block and drop connections. And then a user contributed support for streaming, byte ranges actually, which is a really nice thing that happened not so long ago; byte ranges will be in 5.8, they're not yet in 5.7. Then I have something I think is unique: I have this pink label on GitHub. I use GitHub not for the development; the development happens in OpenBSD CVS. I use GitHub for the issue tracking. In the issue tracker you can create labels, for "won't fix" and whatever, and I created a label, "featuritis," to mark feature requests from users that are out of our scope, just to remind us that a feature is not intended to be in httpd. And then if anyone shows up and asks for that feature again, I can simply point to it. And the user community learned very quickly to ask: could you add this feature, or is it considered featuritis? So I think it's a really good thing that people get an awareness that not every feature is going to be in the software. Tracking the things that we are not going to implement, the not-to-do list, is something really nice, and it works really well. On the other hand, there's hope: some of the requests are rejected now, but maybe I'll change my mind at some point, right? Just to have something for a future release. I was thinking about Apple: a major feature is missing in the initial release, and I say, no way, and then maybe in a year it shows up and everyone is excited again. What we're not planning to implement is other CGI interfaces in addition to FastCGI. People have had long arguments with me about why uWSGI is so much better with Python, and you have this other framework, and blah, blah, blah. But normally you can use FastCGI.
And the FastCGI implementation in httpd is actually very fast. It doesn't write the output of the CGI to a temporary file to serve it to the internet; it streams it directly. So it doesn't make sense for us to add multiple latest-and-greatest CGI protocols. For authentication, we support basic authentication, but there are no plans to add support for LDAP or anything like that. If you need that, install nginx from ports; nginx is still really powerful and good software, and for advanced use cases it's still in our ports tree. For the basic things, httpd is probably the preferred option on OpenBSD already. We don't support modules or plug-ins. HTTP/2 support? No. This is one of the rare cases where I agree with PHK: he wrote something in ACM Queue, I think, about HTTP/2 and why he's not going to support it in Varnish, and the protocol is insane, actually. So I don't know. Some people want it somehow, and it would probably make sense in relayd to do HTTP/2-to-HTTP/1 relaying or something like that, and our asynchronous design would allow HTTP/2 support, but it's madness. I have no convincing arguments to implement it. And we are not going to support regular expressions. People keep writing about that, but I'm not doing it. So rewrites are not possible. Security: it runs chrooted by default, as I said, and it uses privilege separation, three processes. The parent loads the configuration, opens the sockets, loads keys and all that. The server handles the HTTP connections, and you can have multiple server processes. The logger is an extra process for logging. From a design point of view we also try not to reinvent the wheel: we don't use our own string APIs, we use libc whenever possible. Even if there is a possible minor performance trade-off, I prefer to use libc functions. For example, in nginx's optimized HTTP parser there are individual string-comparison functions depending on the number of characters; I don't quite remember the names, but there is a string compare for five characters and another for four characters. It's super optimized, and it's very fast. But in OpenBSD we like to use our libc, because then we can tweak something in our default libraries and everything benefits from it, and we don't have to look into all these specific places. As we know from OpenSSL, that's actually also a good idea: OpenSSL used its own memory allocator, and it's probably still doing that, but we threw it out in LibreSSL. LibreSSL uses the system malloc, so it no longer bypasses the exploit mitigations the way OpenSSL's allocator did. It actually surprised me a lot that, a few months after we did this in LibreSSL, removed it from the fork, I found all these custom allocators in the other web server. OK, that's a design decision for performance, and it makes sense there, but we don't want that. We want our hardened malloc that does randomization and use-after-free detection and so on. So, the privilege-separated processes really communicate with each other. The parent forks them in the beginning, and then they just run; there's no respawning or anything like that. You can configure the number of server processes, and each server process handles its connections with asynchronous I/O; there's no threading involved. And the server processes, for example, don't have write access to the log files: they send a message to the log process.
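That message passing is the kind of thing OpenBSD's imsg(3) framework from libutil does (the framework comes up again in the Q&A below). As a rough, hedged illustration, a toy server/logger pair rather than httpd's actual code, it looks something like this:

```c
#include <sys/types.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <imsg.h>	/* link with -lutil on OpenBSD */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define IMSG_LOG_LINE	1	/* hypothetical message type */

int
main(void)
{
	struct imsgbuf	 ibuf;
	struct imsg	 imsg;
	int		 fds[2];
	const char	*line = "GET / 200";

	/* The parent sets up a socketpair before forking the children. */
	socketpair(AF_UNIX, SOCK_STREAM, 0, fds);

	if (fork() == 0) {
		/* "logger": read one message and write it out. */
		imsg_init(&ibuf, fds[1]);
		imsg_read(&ibuf);
		if (imsg_get(&ibuf, &imsg) > 0) {
			printf("log: %s\n", (char *)imsg.data);
			imsg_free(&imsg);
		}
		_exit(0);
	}

	/* "server": compose a log message; it never touches the log file. */
	imsg_init(&ibuf, fds[0]);
	imsg_compose(&ibuf, IMSG_LOG_LINE, 0, -1, -1,
	    line, strlen(line) + 1);
	msgbuf_write(&ibuf.w);	/* flush the queued message */
	wait(NULL);
	return 0;
}
```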
Basically, the nice side effect is that you can have multiple server processes, and the messages to the single logger get serialized because of the messaging, and the performance is still really good. So we can open log files with the right privileges, in a format compatible with the other web servers, but the server processes never have to touch them. And there are some other things: the server processes, for example, also run as an unprivileged user; they cannot do anything harmful. And if we ever need another thing, we might add another privileged process. In relayd, for example, we have an extra process for the RSA private keys, and in OpenSMTPD too. I haven't added that to httpd yet, but I will at some point, once we have it in libtls. So, LibreSSL added a new API on top of libssl. In the beginning it was called libressl, but this was quite confusing, because libressl the library sounds like LibreSSL the project while actually being just one part of it. So now it's simply called libtls, and it's basically an API on top, but it's so easy to use. You should really have a look at it: you can write TLS clients or servers in just a few lines, and it does everything right. Joel Sing is doing the major work there, and I'm doing it from a reference-implementation point of view. In httpd we decided that, instead of using libssl directly, the old API you know from OpenSSL, we use libtls. This also helps to shrink the size of httpd. And by default it only does TLSv1.2, for some months now, and only strong ciphers and so on. So Logjam, for example, wasn't an issue for httpd. FastCGI, as I said, was contributed by Florian Obser, another German. I asked him, can you give me a quote for the presentation, why did you implement FastCGI? And he said: "I implemented slowcgi", that was the CGI wrapper we had before, "because you didn't stop whining on ICB that nginx can't execute bgplg. And FastCGI in httpd because Bob asked me if I could help you with it." A little background: when we removed Apache, there was no way to run the BGP looking glass anymore, because it is a classic CGI, and nginx does not support the classic CGI interface, which is the right thing to do. So we needed FastCGI support in the BGP looking glass, or a FastCGI wrapper. Florian showed up and wrote slowcgi, which is basically a little server that lets you run traditional CGIs and then talks FastCGI to the web server. And he reused that work later, since it is a fresh implementation of the FastCGI protocol without depending on the official libraries and all that bloat, to write the FastCGI server code for httpd, which works really well, and as I said, we do direct streaming; there's no intermediate buffering to a file. The configuration, that's also an example, I hope you can read it on the next slide: I will give you an example of a basic web server configuration. You open a text file, put that into httpd.conf, and then it's working. OK? That's all you need. Actually, I'm thinking about making listen on port 80 the default as well, so you could run it with an empty file or something like that. But that's the minimum requirement. So, yes? We don't do regexes, but at the moment we support the fnmatch(3) globbing rules, so you can use shell wildcards, basically. What people also do is use *.example.com as a virtual host.
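The configuration slide itself isn't captured in this transcript, but a minimal httpd.conf of that era looked roughly like this (the server names are illustrative):

```
# /etc/httpd.conf -- a complete, working configuration
server "www.example.com" {
	listen on * port 80
}

# fnmatch(3)-style shell wildcards work in names, e.g. for
# catching all subdomains as one virtual host:
server "*.example.com" {
	listen on * port 80
}
```

Note the absence of semicolons at line ends, which comes up again just below in the discussion of the shared parse.y grammar.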
On that note, because you probably think that's hard: I just looked at my own server, where I'm running five virtual domains, including online manual-page display with my CGI and an online source-code repository where you can browse the history, and the complete configuration is 66 lines of httpd.conf. I've been running that since November last year. So what he's saying is really accurate. And since then we've even added things like name-based aliases, which helped reduce it further in my case. So here is something a bit more advanced. For example, you can include an external MIME-types file. If you don't, httpd provides a built-in list of the most common types, like HTML, JPEG, JavaScript. Otherwise you can just use the existing mime.types files compatible with the Apache/nginx format. For the MIME types we even parse the semicolons at the end of the lines, because, as you see, we don't need semicolons at the end of a line. Why? The grammar uses the same parser that pfctl does: parse.y, from pf, or bgpd, or relayd. We use it in many places in OpenBSD; it's our unified configuration grammar, actually. Without using an external library or anything like that, in OpenBSD we just reuse this parse.y code that originated in the pf parser, and we use it in all the newer daemons: bgpd, ntpd, even relayd, all of them. So you can use macros like in pf, you don't have to write semicolons at the end of the line, and it all looks very similar. Some advanced configuration; the slide is very light, anyway. You can listen on multiple ports. You can add additional server names for name-based aliases. Logging is enabled by default, but you can turn it off. Locations do the matching, also using fnmatch at the moment; as I said, we're not going to do regular expressions. There are a few options, all documented in the man page. As usual in OpenBSD, I think the man page is in really good shape, so you can understand what it's doing; it's not overly long, and you don't have to piece things together from a web page or so. Just do man httpd.conf. Blocking rules are supported, with redirections; you can redirect and so on. FastCGI, a few other options; it works well in combination with PHP-FPM, of course, but also with many other frameworks. Future work. This is very new, actually; not even all of the OpenBSD developers know about it, because it hasn't been released yet. Theo is working on it together with me and a few other developers. Most of the work was done by, I forgot his name, was it Nick M.? You will figure it out when it's released. So somebody in OpenBSD implemented something that Theo is designing: we're working on a new framework to improve privilege separation and to further drop privileges, but it's designed in a way that's practical. It's a practical approach, easy to use. Basically, the kernel limits the interfaces to a subset of POSIX, to the environment that the individual process actually needs, and it works really well together with privilege separation. For example, httpd's logger process doesn't have to open any network sockets, so that's a whole class of operations we can simply drop. It's much easier and better designed than systrace, for example, or the equivalents on other systems. It's not trying to solve every possible problem; it's trying to be a practical approach. So stay tuned: it will be really nice, and we will use it everywhere, actually. More features are in preparation, like SNI support.
I promised it before, but it will come. Rewrites: well, not with regular expressions, but we found a very nice way that is currently being investigated, so that we can do rewrites and advanced matching, but with a matching language that I can understand, where I can read the source code and know what's going on. I asked Michael Lucas, what do you think about regular expressions? And he said, oh, people are asking me all the time to write a book about regexes; but why do we have to write a book about it in the first place, if it's so complicated? And I don't want to use something in httpd, just for pattern matching, where you have to read books and books to get it right. So we found something else, and I hope I can release more information about it soon; actually, I just started looking at it yesterday. So yes, this "tame", as it's called in English; I think in Japanese it also has a nice meaning. Tame will limit the privileges of each process. You can decide that the server process is not able to, I don't know, change the system time; for the logger, I think the good example is that it doesn't have to open any network sockets, and so on. So this is, once again, very easy to use, and it will further improve the security of httpd, but it's not specifically for httpd: mostly everything in base will use it. So, OpenBSD 5.7 was released in May. Buy the CDs, support the project, have a look at the funding campaign for this year, and buy us a beer, actually. So thank you.
OpenBSD includes a new web server in its base system that is based on relayd and replaced nginx. OpenBSD includes a brand new web server that was started just two weeks before the 5.6 release was finished. Work is in active progress and significant improvements have been done since its initial appearance. But why do we need another web server? This talk is about the history, design and implementation of the new httpd(8). About 17 years ago, OpenBSD first imported the Apache web server into its base system. It got cleaned up and improved and patched to drop privileges and to chroot itself by default. But years of struggle with the growing codebase, upstream, and the unacceptable disaster of Apache 2 left OpenBSD with an unintended fork of the ageing Apache 1.3.29 for many years. When nginx came up, it promised a much better alternative of a popular, modern web server with a suitable BSD license and a superior design. It was patched to drop privileges and to chroot itself by default and eventually replaced Apache as OpenBSD's default web server. But history repeated itself: a growing codebase, struggle with upstream and the direction of its newly formed commercial entity created discontent among many developers. Until one day at OpenBSD's g2k14 Hackathon in Slovenia, I experimented with relayd and turned it into a simple web server. A chain of events that were supported by Bob Beck and Theo de Raadt turned it into a serious project that eventually replaced nginx as the new default. It was quickly adopted by many users: "OpenBSD httpd" was born, a simple and secure web server for static files, FastCGI and LibreSSL-powered TLS. And, of course, "httpd is web scale".
10.5446/18666 (DOI)
Yes. OK, I can tell you how the name LibreSSL happened. I was sitting in Hanover next to my co-worker, Brett Lambert, and we were just joking about why there is OpenOffice and LibreOffice, and what the point of this whole "Libre" thing is. And Brett, whose humor is special if you've ever met him, and I really enjoy working with him, said, well, we were looking for a name for the OpenSSL fork: LibreSSL. So I mentioned it in ICB, our developer chat, and Bob Beck answered immediately: oh, that's great, I registered all the domains. And then it stuck. So that's why it's LibreSSL: not because we really liked the name that much, but it turned out to be something people can remember. And I think Stallman probably laughs at the fact that we call it LibreSSL. openhttpd.net, I think, is taken by a weird project that looks like an OpenBSD project and says something like "OpenSSL in your mom". I tried to contact the site's owner and the mail bounced; I don't know. So if you can get this domain, donate it to OpenBSD: transfer it to Henning, who has openhttpd.org, which he actually registered ten years ago or so. But we decided, OK, if you need a name, just call it OpenBSD httpd; there's a discussion in the GitHub issues about the name, actually, and it's just hilarious. These are two words, but I think it should work. Simple names are the best. I'm not a big fan of fancy names; I actually have an explicit rule in my company not to use fancy names, like calling something "Lizard" or whatever. It's called httpd: the name is what it does. That's what we do in OpenBSD as well. I do have a serious question. It sounds like there's an awful lot of code reuse, or duplication, between relayd and httpd; you mentioned the parser, parse.y. How do you prevent that from fragmenting further inside the OpenBSD source tree? That's a good question, actually. The good thing is that in OpenBSD we maintain an ecosystem. We sync a lot: when we make a change in one daemon, we sync it to the other ones. Parts of the shared functionality end up in a library, like the imsg framework, which has been in our libutil for some time now; other parts just get synced. And I think it's a very nice fact that we look at the complete thing. It's not "that's my project, and I only care about this one project". If Claudio, who did ospfd and then a few other daemons, changes something in his imsg handling, then I look at it and sync it to the other horde of daemons that I have in OpenBSD, or the maintainers, the people who help me take care of these daemons, do. Actually, I'm thinking about talking about this in more detail at a future conference, how all these daemons are written and maintained in OpenBSD, because it's a very big topic. As long as each of the duplicated parts stays small, it doesn't cause much work. And it basically means: oh, you made this change in httpd, why didn't you sync it to relayd? A question about proxying: the nginx feature people complain about missing the most is the proxy, a reverse proxy. What do you use in general to serve that? relayd. It's just one tool for a job, and relayd already does a lot; httpd we want to keep small. So my suggestion is: combine relayd and httpd. Of course, we're looking into ways to improve this.
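As a hedged illustration of that combination, not something shown in the talk, a relayd.conf fragment proxying for a local httpd might look roughly like this (the address, macro name and backend port are made up):

```
# relayd.conf: reverse proxy in front of a local httpd
ext_addr = "192.0.2.1"

table <httpd> { 127.0.0.1 }

http protocol "web" {
	# Pass the client address on to the backend httpd.
	match request header append "X-Forwarded-For" value "$REMOTE_ADDR"
}

relay "proxy" {
	listen on $ext_addr port 80
	protocol "web"
	forward to <httpd> port 8080
}
```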
There are some limitations, probably, but relayd has all these features, it does health checks and all that, in the base system, so you don't even need a "plus" version or anything like that. Is relayd very portable? Oh, sorry; or, I guess in general, do you intend to make httpd portable, like a -portable version? relayd has been ported by the FreeBSD folks; it's in the FreeBSD ports tree, and I think the beloved pfSense is using it as well. So relayd is already on FreeBSD. httpd should be even easier, because relayd does more OpenBSD-specific stuff that is commented out in the FreeBSD port, like the carp integration and the routing and all that, and httpd shouldn't be a big issue to port. The problem is that for a real portable release I would need a reliable portable maintainer: somebody who shows up, stays around, and is excited about doing test builds on ten different operating systems, using whatever is the current state of the art of autoconf or similar, and all that. I tried to do this once, for iked I think, and I failed; that's not my thing. The good thing about OpenBSD is that we focus on the base system, and then, when we find a portable maintainer, we get the -portable version. So yeah, if anyone wants to volunteer; but you would have to stick around. Anything else? No, it should be fine, it should just work; it's not a big thing. OK. One more thing: when we imported httpd, to my surprise, people started using it very quickly, actually, and for some insane things already. The adoption was so fast, it really surprised me. But OK, that's a good thing. And as always, with all the software we write in OpenBSD, and that I write, I'm really interested in testimonials. Tell me your stories, where you are using our stuff; that's a contribution that I enjoy. I enjoy hearing stories about things like relayd and vxlan deployments and all that. So let me know if you're using it, and then maybe we can talk about issues and fix stuff. It's also about giving back, about sharing what you're doing. If you have something really secret, you don't have to tell me where, but that's a currency we like. OK, that's it. Thank you. Thank you.
OpenBSD includes a new web server in its base system that is based on relayd and replaced nginx. OpenBSD includes a brand new web server that was started just two weeks before the 5.6 release was finished. Work is in active progress and significant improvements have been done since its initial appearance. But why do we need another web server? This talk is about the history, design and implementation of the new httpd(8). About 17 years ago, OpenBSD first imported the Apache web server into its base system. It got cleaned up and improved and patched to drop privileges and to chroot itself by default. But years of struggle with the growing codebase, upstream, and the unacceptable disaster of Apache 2 left OpenBSD with an unintended fork of the ageing Apache 1.3.29 for many years. When nginx came up, it promised a much better alternative of a popular, modern web server with a suitable BSD license and a superior design. It was patched to drop privileges and to chroot itself by default and eventually replaced Apache as OpenBSD's default web server. But history repeated itself: a growing codebase, struggle with upstream and the direction of its newly formed commercial entity created discontent among many developers. Until one day at OpenBSD's g2k14 Hackathon in Slovenia, I experimented with relayd and turned it into a simple web server. A chain of events that were supported by Bob Beck and Theo de Raadt turned it into a serious project that eventually replaced nginx as the new default. It was quickly adopted by many users: "OpenBSD httpd" was born, a simple and secure web server for static files, FastCGI and LibreSSL-powered TLS. And, of course, "httpd is web scale".
10.5446/18665 (DOI)
Thank you for having me here. I'm pretty excited about talking here; nervous, actually, so please bear with me. My name is Maciej; I'm a developer and system administrator, and I do the DevOps thing. I will be talking about the container thing for FreeBSD. I will start by talking about the technology involved and how to place it in the existing landscape; the point is that the technology here is not new. I will expand a bit on the container mindset, which is what actually is new about Docker and the Rocket implementation. Then I will say a few words about the App Container Specification, which Jetpack implements, and I will finish by talking about the Jetpack implementation itself. Containers are a form of operating-system-level virtualization, which is something well known: a single host kernel runs multiple isolated guest instances. These are FreeBSD jails; these are OpenVZ virtual machines. It's old. The difference from plain old hypervisor-type virtualization, which is what we usually think of when we hear the word, is that with a hypervisor the host runs completely independent guest operating systems: each guest runs its own kernel, has its own virtualized hardware, is completely isolated from the remaining guests, and each guest believes it has all the hardware to itself. OS-level virtualization is when the kernel isolates multiple parts of the OS so that they believe they are a whole operating system, but they are isolated at the host level: they actually share the kernel with the host, they are visible in the same process tree, and they use parts of the same host file system. Compared with a hypervisor there is, on the one hand, less isolation: the guest and host operating system must be the same, or at least binary compatible; we can run Linux guests in FreeBSD jails to the extent that FreeBSD's Linux system-call emulation allows. On the other hand, it has much lower virtualization overhead, because the system doesn't need to emulate whole hardware, it just needs to enforce access rules; there are no multiple kernels, no multiple operating systems to run. The isolation level is adjustable, and it is possible to share resources: you can cross-mount parts of the file system via nullfs or bind mounts, share buffers for loaded files, and so on. And the technology isn't new. It started in 1982; it's as old as me, actually. chroot was introduced into Unix that year, and it is the system call that lets a process and its children see a selected directory as the root file system. Then in 1998 FreeBSD got jails, and soon other operating systems followed. These technologies add extra levels of separation, additional restrictions, on top of chroot. The newest ones are Linux cgroups and LXC, which is what the modern container systems, Docker and Rocket, are based on. These technologies isolate the file system; additionally, they isolate the process tree, so guests can't see the processes of other guests or of the host; and there is additional isolation between environments for the administrative system calls. Basically, these are technologies that make chroot behave like a more isolated, more separate system. But the tooling around these technologies is still stuck in the virtual-machine mindset. It treats guests as a complete system that is managed from the inside: you open a console in a FreeBSD jail, or SSH into the jail.
You start services; FreeBSD jails have their own rc.d and rc system, their own init. The jails are usually long-running and mutable: they can change state, they can be managed like any server, and so they also carry the management overhead of a whole server; you need to manage access, user accounts, backups, and so on and so forth. In January 2014, Docker showed up, and it brought a new mindset, the container mindset. This is what people had been doing before as well, in closed source, in platform-as-a-service companies, in-house; Docker was the first open implementation of the thing. The difference is that containers are service-oriented. Each container is a single service. It is not a system, not an Ubuntu machine or a Debian machine; it is a Redis database, it is an nginx web server, it is a Rails application server. The guest is managed from the outside via an API: you don't normally log into containers, you call the API to start and stop them. If you need something changed, you destroy the container and create a new one. The images are immutable and can be distributed, can be shared. Provisioning is fast and copy-on-write: you can almost immediately clone a new container from a pre-made image. The main points that distinguish the container mindset are the layered storage, the explicitly defined interaction points (there is a limited number of places where a container interacts with the rest of the world), immutable images, multi-container setups, and, as I said, the service orientation. I will expand on these on the next slides. So, at the beginning we have an image. It is just a base root file system of an Ubuntu long-term-support version. It is read-only: once it was written, you cannot change it; it is stable. To prepare a containerized application, we create two child images, where the arrow on the slide means inheritance: only the difference is actually stored, and one image is built on top of another. So one image has the Redis server and the other has the Ruby language runtime, and from the Ruby image we make another child image with a Rails application. Now let's say Bob wants to start the Rails application. He starts a container. A container is just a writable layer on top of the image's root file system, and it's volatile: you don't care what happens to it, and if you stop the container it can disappear; you are not supposed to care about that layer's data. And it's blazingly fast to start, because you already have the Rails application, you already have the image directory: in Jetpack you make a ZFS clone, in Docker you just put a union-filesystem layer on top of it. You don't copy anything. But the application has precious data, and for that we have volumes, which are persistent directories shared with containers. We need to explicitly say: this directory we want to keep on the host, we want to keep that data. This is important, because these are, say, user uploads. And the app wants to talk to a database, so it's linked with a second container that hosts Redis; Redis has its own volume for persistence. Now let's move that arrow a bit, because when Alice wants to run a copy of the same app, she can just clone it. She doesn't need copies of anything that's already in the images; she just has her own small container, the thin read-write layer, and her volume. And if we want to host another app, we can add it to the same hierarchy. Nothing is repeated unnecessarily.
And if Bob wants to scale his app, he can just start a second container to scale out; it will share the same volume and the same Redis link, and it will just work. So that's how it looks; I hope it's not as confusing as it looks right now. The explicit interaction points of containers: you interact through the command-line arguments and environment variables that you start the container with; you define network ports; you define shared volumes; and you're absolutely not supposed to care about anything that's not in a volume. You've got standard input, standard output, and the exit status. You don't get to interact in any other way. The immutability is very important. Images, once built, are read-only; the container's write layer is volatile, throw-away; and volumes are the place where persistent and mutable data lives. Because of that, images are reusable, uniquely identified, and verifiable. Once an image is built, it is set: it is one single set of files that can be identified by a checksum, by a crypto signature; you can verify that it's still the same; you can share it, publish it, and reuse it multiple times. Because it's read-only, you can safely clone multiple containers and multiple child images out of it. Because the container's write layer is throw-away, you can easily exchange containers: if you want to upgrade software that is running in a container, you just shut down the old one and start a new one, just like that. Or the other way around: you first start the new one, verify it works, then shut down the old one and redirect traffic. And you are forced to clearly declare where the data you care about lives. I believe this is a good thing, because you always know what to back up and where the app can write. The net effect is that, besides the stable read-only images, the management overhead of running a container is that of a single service. You get the benefits of jail isolation and of the fact that a containerized application is enclosed and self-sufficient, including all its dependencies; but those dependencies are not copied, not repeated, because through the image hierarchy they are actually shared. And you manage the container as a single service. Docker was started in 2013, and it's actually pretty impressive, because this is two-and-a-half-year-old software that is so popular and so widely deployed; I don't know that I've heard of any other software that was so widely accepted so fast. It was the first free container runtime. And I stress the word free, because platform-as-a-service companies had to have been doing this before, and other companies and administrators must have been doing it in-house; Docker was the first tool to actually formalize the approach. It defined the approach, it defined the paradigm, and it was adopted extremely quickly. But because it was defining the paradigm, it was implementation-driven, and that has a lot of drawbacks. It was the only free container runtime for a long time, so it basically developed a monoculture, and it didn't need to care that much about the details, because people would use Docker anyway: it works, it exists, there's no competition. It prototyped the container paradigm; it was the first version, the first approach. But because of the extremely fast and wide adoption, it got locked into those early design decisions, because people were already using it, using it in production; there were already a lot of pre-made images, and everything had to stay compatible because of the success.
And through that process it ended up being implementation-defined. With all due respect, Docker is awesome, but it's got its drawbacks; there is no software without faults. And with this whole quick success of the new approach, a quote comes to mind from the classic on project management: you will always throw the first version away; there will always be a second approach to implement. Docker, because of its success, didn't get the opportunity to do that. I sincerely hope to see a Docker 2.0 and to see what they come up with at that point. But right now there are some design decisions, like running as one huge binary blob, or the unauthenticated HTTP downloads, that are unfortunate. So the people from CoreOS, a Linux distribution that started soon after Docker got popular, a distribution that focuses on Docker and on containers, where the host is just a thin layer to run systemd and Docker and any actual service should be containerized, at some point figured out that they wanted to try implementing their own container runtime, because they could not agree with, and, as they said, could not defend with a straight face to their clients, some of Docker's design decisions. So in December last year they started their own project, called Rocket, which is the first implementation of the App Container Specification. I will talk about the specification a bit more later. It is designed for composability, security, and speed, and it breaks the Docker monoculture on Linux. It is heavily based on the heavily used systemd, so it's pretty much tied to Linux. What is it?
Jetpack brings application containers, popularized by Docker on Linux, to FreeBSD. Application containers are a new approach to virtualization, popularized in the last two years by Docker - a Linux implementation that all but monopolized the market. Jetpack is an application container runtime for FreeBSD that implements the App Container Specification using jails and ZFS. I will speak about how the container paradigm is different from the existing jail management solutions, how Jetpack fits into the general landscape of container runtimes, and about Jetpack's inner workings and implementation challenges. A quick demo is not unlikely.
10.5446/18664 (DOI)
open specification. And it's important to know that Rocket is implemented specification-first: first they write the documentation, then the schema code, then the supporting code, and only then do they implement it in Rocket. And the spec is actually neutral: it's not Linux-specific, and it's really clear. The first part is the App Container Image (ACI), which is specified to be just a tarball. It contains a JSON manifest and the files under a rootfs directory, and it is identified by a simple checksum. A sample manifest looks like this; we will, hopefully, be running the image built from this manifest today. It has the name. It has labels, like the version number, operating system, and architecture; you can use these to discover the image, which I'll show in a moment. It has the application that it runs: it executes the Redis server as a specified user and group. It's got mount points, which should be fulfilled when the application is started. It publishes the Redis port. It's got a timestamp, and it has dependencies. Dependencies are how inheritance is implemented in the spec: this image depends on a FreeBSD base image, which means its rootfs will be unpacked on top of the FreeBSD base rootfs. The next part is discovery, which is the means to get from an ACI name and labels to a URL to download the image, to its PGP signature, and to a way of discovering the public key for that signature. So, for example, if we want to discover the FreeBSD base image with these labels, what do we do? First we try simple discovery: we just try to resolve a templated URL, adding the version, OS, and architecture labels, with .aci appended for the image and .aci.asc for the signature; it would be pointless to discover a public key this way, because it would be published the same way as the image. For our image the URL would look like this, and it is a 404; it doesn't exist. When simple discovery fails, there is a meta-discovery process: we fetch just the name, with a discovery query parameter added, and look for certain HTML meta tags, which should point us to the right URLs. If that fails, we strip the last component of the name and try again, going up and up in the URL hierarchy until we either get the meta tags or run out of components. So for the image we are looking for, we start by trying this URL, which 404s, so we go up here and get these meta tags. The ac-discovery tag specifies that for the 3ofcoins.net prefix there is this URL template for downloading the image and its signature, and the ac-discovery-pubkeys tag gives the URL that holds the public key. So in the end, after rendering the URL templates, we have these three URLs. The runtime side is the pod. A pod is a list of applications; a pod can run more than one application, more than one image, and they are launched in a shared execution context: they share a PID namespace, the network, IPC, and a hostname, but each application has its own separate file system; it is a chroot inside the jail. More precise isolation requirements can also be specified in the image manifest or in the pod manifest as isolators. So, the pod manifest we'll be running for the demo looks like this. It's got two applications. One is Redis, and we've just seen the image manifest for that image. The second one is Tipboard, which is monitoring-dashboard software that I chose to run simply because it's pretty.
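The manifest slides themselves aren't reproduced in this transcript; based on the description, an appc pod-manifest template of roughly this shape would match it (names, the spec version, and paths here are illustrative, not the actual demo files):

```json
{
  "acVersion": "0.5.1",
  "acKind": "PodManifest",
  "apps": [
    {
      "name": "redis",
      "image": { "name": "3ofcoins.net/redis" }
    },
    {
      "name": "tipboard",
      "image": { "name": "3ofcoins.net/tipboard" },
      "mounts": [
        { "volume": "tipboard-data", "path": "/data" }
      ]
    }
  ],
  "volumes": [
    { "name": "tipboard-data", "kind": "host", "source": "/home/demo/tipboard" }
  ]
}
```

Note that this is the unreified template: the image entries carry only names, no pinned IDs, which is exactly the gap that reification, described next, fills in.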
And there is one volume that we share from the host: the data directory for Tipboard, where the definitions of the panels live. But this is not the complete information: it doesn't precisely identify the images, and not all the mounts are fulfilled. The Tipboard mount is fulfilled, it has a volume, but the Redis data mount does not have a volume. So the implementation has to "reify" the manifest, which is basically a fancy word for "materialize", I think: it has to resolve the names and slap precise IDs on them, to be sure that if it has to recreate the container it will reuse exactly the same images, and it adds the missing empty volume for the Redis data. It also assigns an IP address for the new pod. And the last part of the system is the executor, which is basically the runtime. On its own side, the executor is responsible for assigning pod UUIDs, rendering the file systems, setting up volumes and so on, and starting the application processes. From the app's perspective, it is responsible for making sure the app sees the proper environment variables, runs with the right UID and GID, and so on. And inside the pod we have the app container metadata service. An environment variable is exposed that points to the metadata service, so an application can see the annotations from its manifest, its full manifest, the pod UUID, and the manifest and image ID of the current app. This way you can use annotations in the manifest to parameterize the behavior of the container. The metadata service also provides a way to cryptographically sign and verify arbitrary data: one pod can ask the metadata service to sign some piece of data, and another pod can then check with the metadata service that the pod with that UUID really signed it. Or an app can ask the metadata service to sign its own data, pass it to the user, get it back from the user later, and confirm: yes, it's really mine, I really generated it. Jetpack itself is a not-production-ready, incomplete prototype implementation of the appc spec for FreeBSD. It's written in Go; it uses jails and ZFS. As far as FreeBSD's Linux emulation allows, it can run Linux images. Unfortunately I cannot demo that, because the last update of -current made it panic, and I didn't update again; it's beyond my capabilities to debug. But I've had it running, and it should run 32-bit Linux images on stable, on 10.1. And -current, besides the recent changes that panic on my workstation, also introduced 64-bit Linux emulation, which means we can use Rocket's AMD64 images, and with Rocket's tool chain we can convert Docker images to ACIs and run them in Jetpack too, as far as the Linux emulation allows. And as Rocket breaks the Docker monoculture on Linux, Jetpack will hopefully break the Linux monoculture in the container world. This Monday, Jetpack will have its half-birthday. We use ZFS for storage; it's based on snapshots and clones. I'm actually running out of time, but... (You actually have until 5:30; the schedule improved.) OK, great. (You have three minutes.) OK, so we'll fit the demo in, I think. Each image's rootfs is held by the runtime as a ZFS snapshot; dependent images are cloned from the parent and then updated; and an application's rootfs is also a clone from the parent image.
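As a hedged illustration of that layering, with made-up dataset names rather than Jetpack's actual on-disk layout, the underlying ZFS operations look roughly like this:

```sh
# Parent image rootfs, kept as a read-only snapshot:
zfs snapshot zroot/jetpack/images/freebsd-base@rootfs

# A dependent image starts life as a clone of its parent...
zfs clone zroot/jetpack/images/freebsd-base@rootfs \
    zroot/jetpack/images/tipboard
# ...gets modified during the build, then is snapshotted itself:
zfs snapshot zroot/jetpack/images/tipboard@rootfs

# Starting an app just clones the image snapshot again, so a new
# pod costs one clone instead of a full copy:
zfs clone zroot/jetpack/images/tipboard@rootfs \
    zroot/jetpack/pods/42-tipboard/rootfs
```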
So provisioning is as quick as a ZFS clone. And each empty volume is also a ZFS dataset; an empty volume is a way to tell the runtime: I don't have a directory on disk, please create a new one for this volume. And because it's a dataset, we can snapshot it, back it up, clone it. In the long run, I want to be able to say: snapshot this application with all its volumes and make me a copy, or make me a copy with a new version of the image. The runtime itself uses jails for isolation, with chroot inside the jail for the extra file-system isolation between apps. I'm also considering nested jails for app-level isolators, but that is a long shot. And volumes are nullfs-mounted from the host, from the actual volume's dataset, into the application's rootfs. Image building, I was afraid of implementing that, but it turned out to be a really simple process, because it's just: create a pod from the parent image; copy the build dir in (in the long run I will make the build dir a volume; right now it's a copy); run the build command inside the pod; and the builder can either include the new manifest or build it inside the pod, and we'll see why in a moment. Right after the build script is done, the pod's rootfs is used as the new image's rootfs. That means there is no new syntax, no Jetpack equivalent of a Dockerfile to learn: you can provide any kind of build script. You're a Chef person? Go on, run chef-solo. You can run make, and there are sample make macros to make that easier. You can run a shell script, which is how I process the Linux pods. You can use basically any tool you want, as long as it is a command. As an example, here is the build script, the example Makefile, to build the Tipboard image we'll be running in the demo. It starts by specifying the parent image and the packages we want installed, and the standard targets are added automatically. After the packages are installed, for the build target the Makefile itself is copied into the pod, and the build part inside the pod is executed from the same Makefile; so in a single file we have the preparation outside on the host and the build process inside. The build just prepares a Python virtualenv, installs some files, makes some symlinks, and generates the manifest. And the generation of the manifest executes inside the jail, inside the build pod, because, as you can see here, we don't specify the version: we ask the Tipboard we just installed for its version and use it in the manifest. We don't need to pin the version; the Makefile just installs the newest one, and the generated manifest has the proper version inside. And here, from the same image, we can see the settings.py file, which is an example of using the metadata service: we get the metadata URL from the environment, fetch the IP address annotation, which, as we've seen, is added during pod creation, and use it as the host for Tipboard to listen on. There's still a lot to do: custom isolators, proper network management and, I forgot to put it on the slide, better image handling would be great as well, and Capsicum would be a great addition. The UI is a mess, and the code needs refactoring, which is probably what I will focus on after the conference. There's a lot of boring stuff to do: documentation, acceptance tests. And if somebody has an idea how to test something this complex, I'd be happy to hear it.
My best idea right now is to use Cucumber, because I used a lot of Ruby before, but maybe there's something better. Then native multi-app support, because right now only one application can be started at a time, so for the demo we will need to open multiple terminals; and proper logging, which goes with that. That's more or less the laundry list for 0.1.0, the first actually numbered release. So it's at a pretty early stage, but it works, and we're going to see it, as we have some extra time. First I will create a pod. I have the images; I have already built the demo images, Tipboard and Redis, to avoid losing time downloading packages. So I just start the pod and save its ID. And we use the template manifest, not the reified one; Jetpack will reify it. Don't look too closely at the format of the output, it will be reworked, it will be prettier, but you can see that it inserts the empty volume for the Redis data directory. It created a new pod and assigned it an ID. So we have the new pod with two apps, and we'll start the two apps now. As we don't have any process management yet, we just run the apps in separate terminals. First we run Redis; in the second terminal we run the Tipboard app. And to feed data into Tipboard, we run the client. We can see here that the client is pushing data. And on the IP of the pod we can see... well, we should be seeing a bit of text on the left-hand panels. I will just restart the client. You see a bit of text: this is the client, this is the browser. OK, and here we can see a pretty monitoring panel running from the containerized images, from the pods. Let's shut that down now, destroy the pod, clean up after ourselves. And I have no idea, maybe somebody knows, why a pod that did any network input/output lingers so long in the dying state; it can be a minute. If somebody knows, catch me after the talk, please; I'd be happy to learn. The second piece of the demo I would like to show: besides the FreeBSD base image, I prepared and published a port-builder image, which is basically the base system plus the pkg binary plus dialog4ports, and it can be used, by mounting some volumes, to test building ports in a clean system. We just create a pod, save its ID for later, and run it immediately. We mount the ports tree and the distfiles; they are mounted separately because the ports volume is read-only, to avoid writing anywhere on the host system. And I can choose to share distfiles, or, if I skip that, Jetpack will create an empty volume and the pod will download the distfiles on its own. And here is the image name, plus an annotation saying that the port to build is misc/figlet. And the image right now is not here: we don't have an image named 3ofcoins.net/port-builder, and we don't have any trusted GPG keys. So let's do that. All right, Jetpack will first do the discovery. The font is too big, but you can see, and it will soon scroll by, that it uses URLs just like the ones in the discovery part of the presentation. It downloads the image, it downloads the signature; let's wait until it completes. It notices that it doesn't have the public key, attempts key discovery, downloads it, and asks me if I want to trust it. Yes, I know that key; I generated it myself. The image is imported. I didn't start the metadata service, apologies. Hm. But the pod is already created; it does have a port builder.
So if I just run that pod, it will automatically start the app again, and it will start make again; that's also useful if the make fails, because I can simply restart it in the same pod. We don't want to build the docs. And it's done. I can open the console: we just built a port on a clean system. Again, once we're done, let's clean up. That would be it. Any questions, remarks? Yes? Excuse me? Yes, I have to be a privileged user. It does not install setuid right now, and it won't be setuid; it's up to the administrator to configure the sudoers file and aliases. Right now I run it through an entry in sudoers and a wrapper script. Yes? Could you go back a slide? First of all, this is fantastic. Where are your slides going to be available? Yes, right after the talk I will upload them to Speaker Deck; I will tweet it, and hopefully the conference account will also spread it. Cool. Next question: what kind of configuration or runtime hooks are available for processes running in containers? Like, on start; is there a script I can run on shutdown, or on a signal, or something like that? The specification says there is a pre-start hook, which always runs as root inside the container, and a post-stop hook; pre-start can be used, for example, to generate configuration as root while the main application runs as an unprivileged user. But there are no runtime signals; there's no way to, say, run a hook whenever I want something to happen. The specification does not support any extra signals. But probably, if there is a real need, it can be discussed on the appc spec GitHub. Yes? I can convert Docker images to ACI images; there are tools for that, and they run in Rocket. And as soon as the Linux emulation stops panicking with Jetpack and the 64-bit emulation is stable, because right now the 64-bit Linux emulation is only in -current, only if you track SVN, then, to the extent implemented by FreeBSD's emulator, it will be possible to run Linux ACIs. The specification says that the main isolation between the applications of a pod is the chroot, plus generic isolators, things like setrlimit, that don't need a jail. On Linux, in Rocket, it's implemented so that the pod's rootfs contains a systemd, and that systemd starts each application in its own chroot, but without any further isolation. Yes. And right now that's all that needs to be implemented. It would be necessary to implement some extra isolators at the app level, but only if it's possible to start a nested jail that still shares the PID namespace and the network devices; a child jail should be able to share networking with the upper-level jail, but I'm not sure about PIDs. I didn't give it much thought, and it's not really required by the spec, so it will be done only if it makes anything easier or possible. There are people outside waiting for this room. OK, so I think that's it. Thank you very much for listening. Thank you.
Jetpack brings application containers, popularized by Docker on Linux, to FreeBSD. Application containers are a new approach to virtualization, popularized in the last two years by Docker - a Linux implementation that all but monopolized the market. Jetpack is an application container runtime for FreeBSD that implements the App Container Specification using jails and ZFS. I will speak about how the container paradigm is different from the existing jail management solutions, how Jetpack fits into the general landscape of container runtimes, and about Jetpack's inner workings and implementation challenges. A quick demo is not unlikely.
10.5446/18663 (DOI)
Hello everyone, I am Olivier Cochard-Labbé. This is the second time I have come here; the first time was in 2007, I believe, when I presented my first open-source project, FreeNAS. The problem with FreeNAS is that I am not a system administrator, I am not a developer, and I had never touched FreeBSD professionally in my life; FreeNAS was just an experiment. At some point the idea that people were actually depending on it frightened me a little, but seriously. A few years later I chose to hand the project over to iXsystems. One day I told myself that rather than half-supporting it, I would give them everything, the name, all the work, because I did not want to continue. I am not a storage guy; I could not give that project a direction, and you cannot build a good storage product if you do not know the field. My real job is network engineering; I work for Orange. After FreeNAS I decided to build another appliance, so I started the BSD Router Project: a FreeBSD-based firmware dedicated to software routers. It does not target home use, there is no web GUI; it targets the enterprise world. Let me very quickly present Orange. We have a lot of customers, a lot of employees, and many different services. We have a lot of network devices to manage; it is a really large number of devices. Just try to imagine the SNMP traffic needed to monitor all this equipment; that is a fun exercise. For information, there are about 300 FreeBSD firewalls involved in that. As for me, I am on the internal-network side: my job is to run the internal networks, our own offices. It is a comfortable position, because I do not face customers directly, and that is why I get to use experimental things, like FreeBSD everywhere. Because this company is a very traditional telco: when you show up with open source, it is very hard to get the solution accepted; with Juniper or Cisco there is no problem, but for open source it is very difficult. In my position, though, I can use it, and I try to push open-source solutions. It is in that context that I was allowed to publish the code I am working on: it is essentially an extension of the BSD Router Project as we use it at Orange. All the source code I am going to present has been online since last week; I had to push my management for it, and they had the courage to accept. Why did I start this project? In the traditional IT world there has been a big shift: for some years now, with virtualization, system administrators have been able to deploy large numbers of virtual machines, and they have very nice automation for those machines at scale. But even if it is a virtual machine, when it sends a packet onto the wire, it is a real packet on the wire.
In our company's network team, we still deploy physical routers, and they are not as easy to deploy as a virtual machine. So we have a problem in the network world keeping up with that pace. I have been waiting for years for a solution for managing network configuration; I do not remember when that effort started, maybe ten years ago, and by now I think it is more like twenty years, just to be able to deploy a simple router, and it is still not possible for a network like ours. It is maddening. To me, a server is far more complex than a simple router, yet we still cannot deploy a router the way we deploy a server. So I said: why not use the same tools as the IT world, Puppet, Chef and Ansible? But for that, I have to treat my routers like servers. Using servers as routers in the network world used to be hard, but today it is easier, because we have things like netmap, and servers are getting faster and faster. We have solutions like Intel DPDK, and a French company, 6WIND, has started shipping very interesting fast network stacks built on that kind of fast packet processing. So today it is much easier to put this sort of solution together. That was my first motivation: use such a solution to simplify my work. The second motivation is our worldwide offices. Because we are a telco, we use the same template everywhere: a dedicated line to every corner of the Earth, even in African countries, with a Cisco router, Cisco switches, or a Juniper router. That costs a lot. For us it is almost the standard setup, but it is quite expensive. When you check how many people actually work in some of these offices, often no more than one or two, it is quite incredible to see what we pay for this service. That was my second reason for the project: why not try to find a solution that reduces the cost? And I have a cool manager. He said: OK, I give you one year; do whatever you want for one year, I just want a solution for this. That is cool, quite cool. So I said OK, I will build a solution, my own appliance, working alone on this project with full freedom to do what I want. I want a plug-and-play appliance. What I mean by plug and play: today, even to install a Cisco or a Juniper router, you have to put it on your desk with a console cable to load a minimal bootstrap configuration; not the full one, but the minimum. I do not want that. I want centralized management. And what about a web GUI? OK, I got that question. I am not a fan of web GUIs, because for large-scale deployments a web GUI on each device is totally useless; that is what I believe about fleet deployments, where you manage the network as one service. But my management wanted a web GUI, because it reduces the training cost for our team; fine, that is a fair point. The second idea was not to use a dedicated line, but a simple VPN over the Internet on standard access. That is why I will explain my idea from this angle. And then, following this idea, I said OK.
inside each office, here is what I need if you follow my logic. I need the box. It provides the VPN, the routing and the Wi-Fi: it gives Wi-Fi and network access to my users. It sits behind whatever gives it Internet access, an ADSL router or anything else, and it must be connected to the rest of the network through a VPN. And I need a manager. Those are the ideas; I need these three features in my solution. But I am also looking ahead: if this solution works, if it is validated internally, I will have the problem of managing a large number of sites, so I need a scalable solution. That means I must be able to deploy on the order of 1000 VPN routers across several VPN gateways. Easy to say.

So I tried to put my idea on paper. I need a manager that holds all the configurations and sends the commands. I need VPN gateways. I want to cut my tasks everywhere: when I deploy 100 remote access sites, I do not want to maintain 100 separate setups for Wi-Fi access authorization, so I simplify that with a RADIUS proxy. That's the trick. I put the whole design on paper just to reduce my work; I am very lazy, I want to reduce my work to the strict minimum. And I want a single image for all these features; in the network world we call this firmware, for the IT world it is just a router image. One firmware for all the roles: that was my goal.

The next point: today, when we deploy a network device, we buy the Cisco or Juniper gear, they ship it to us, and we have to configure everything ourselves. With this solution, I want my manufacturer to do the deployment for me: I give him the technical spec of the device, I give him the firmware, he installs it and ships it. That is the idea for the device. And when the device arrives at the office today, we need a technician on site to connect to the router and install a bootstrap configuration. I do not want that guy on site. I want you to just plug two cables, one for power and one into your Internet box, and maybe screw on the Wi-Fi antennas if you want. That's all.

When this appliance boots, it just asks for an IP over DHCP. It sets its clock, because it comes straight from the factory and we have no idea what time was configured there, and because I am using certificates it needs correct time. Then it automatically opens a tunnel to the enrollment gateway. That gateway accepts everybody: all the new devices carry the same factory certificate, and it accepts every new device, but it keeps them blocked, because I do not know whether the boxes were intercepted in transit or are just copies. For security reasons I do not want them on my production gateway yet.

Once a device is connected, my administrator just has a list. I have not started working on the web GUI; that is the final goal. He just sees the new devices connecting; it is just a list, a table that says: OK, you already have 3 routers, and you have two new ones, one in Sydney and one in Singapore.
You should know that you can retrieve the MAC address of the box, the serial number if you want. You know whether you shipped a box to Sydney; you can call the guy on site. You just have to vet this new device. This is done by assigning a profile to the router: it is a VPN Wi-Fi router, or it is a VPN gateway, and so on. There are future roles too. One is a serial terminal server: we have many points of presence in the world, so offering a serial console service over the Internet is one idea. Another idea is a captive portal, simply because we sometimes have to offer guest Wi-Fi access, so a captive portal application would be a good fit. That part is only planned. The administrator just selects the box's profile, fills in the data, pushes the button, and the whole configuration is sent to the remote box. It reboots, and at that stage it generates its own certificate; once rebooted, it is connected to the office. That was my target, and that part is working today.

After this big concept, I had to choose the software, and since I enjoy working on this project, it was fairly easy. My audience is the network administrator: someone who does not know how to patch a service or build a custom appliance. For an upgrade, for example, they just want a new firmware image to install, then reboot. That is what nanoBSD gives you; that is what the BSD Router Project gives you, because it is just a nanoBSD. So I use it for this purpose. I am not a sysadmin; the one operating system I know well is FreeBSD. For this project I follow the head branch: I find it much easier to track head than the other branches, and it is quite stable.

For management and configuration deployment, I use Ansible. As I keep explaining, in the network world we deal more and more with software, and the language to know there is Python. Ansible is built on Python, so for me it was a good opportunity to learn some Python. Ansible is very easy to use: within a day I was able to create dynamic templates for the configuration files. That is why I chose Ansible.

The second part is the VPN. I started the project with IPsec, but it was a pain, because I wanted to run routing protocols over IPsec. That means adding a layer, a GRE layer, and I hate adding layers; I love the KISS principle. Routing works natively with OpenVPN, unlike with IPsec, and that is why I use OpenVPN. For the routing software I first used Quagga, like a good Cisco guy, but then I found BIRD. If you are a routing guy and you don't know BIRD, you should try it: it has a very nice feature for filtering between different routing tables. That is the routing software I use in this project.
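As an aside, here is a minimal sketch of what such inter-table filtering looks like in BIRD 1.x configuration syntax; the table name and prefix are invented for illustration and are not taken from the actual Orange setup, and the import/export directions are as I understand BIRD's pipe semantics:

    table vpn;                      # a second routing table next to the default "master"

    protocol pipe vpn_to_master {
            table master;           # primary table of this pipe
            peer table vpn;         # secondary table of this pipe
            export none;            # push nothing from master into vpn
            import filter {         # leak only internal prefixes from vpn into master
                    if net ~ [ 10.0.0.0/8+ ] then accept;
                    reject;
            };
    }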
Now for the hardware selection. I am based in France, and I already knew a company there, PC Engines; they make small, cheap boxes. I asked them for a box, they sent me one, and it is very, very easy to work with. Very interesting. I benchmarked it, because of my target: 5 or 10 people per office at most, so I do not need a lot of bandwidth. When I saw the performance of this box, I knew I could use it. That is from my bench. On the BSD Router Project I try to find good, honest ways of benchmarking; it is quite hard, but OK, as you may see in a moment. Just as an example, here is the performance of this box, and the impact of enabling different firewalls such as ipfw, pf or ipf. Do not conclude from this kind of slide that one firewall is better than another: I am not comparing the firewalls, only their impact on forwarding performance.

I have no data from the IPsec point of view for the moment; I postponed that benchmark because it is genuinely difficult. It is hard to benchmark a router, and harder to benchmark a firewall, but at least there is an RFC describing how to benchmark a router and how to benchmark a firewall: you get guidance, you know which parameters to measure. For IPsec there is nothing. A few months ago, though, I found a paper, from a university in Bratislava, Slovakia, I think, proposing a methodology for correctly benchmarking a VPN gateway. I had started with a simple script; now that I have a methodology, I will start benchmarking IPsec and OpenVPN gateways. That is why it is not here yet.

I am very happy with this APU: it has been running for more than 6 months and it is a very stable box. For the VPN gateway, since we are a carrier-class operator, we normally use very expensive servers, HP, IBM and so on. I went for a Supermicro instead, a much cheaper server than the ones we usually work with. It has the size of a network appliance, and for a network guy it is very easy to rack and work with. But I am not very happy with my uptime: I have never reached 30 days of uptime with this appliance. It may or may not be the hardware's fault; I had a series of problems. The first was with ntpd: after about 20 days the clock had drifted badly, and when I tried to resynchronize it, it refused. The second was my SSD disk, which disappeared and came back; I do not know why. It is also really picky about RAM, and that is with the default RAM; I found all this out while using it.

I am very happy with the PC Engines boxes; with those, life is easier. And here is the funniest one: an 8-core Atom, on which I started my benchmarks. It is quite good; let me explain. The FreeBSD default is the red line. By default, when you have 8 cores, the NIC driver creates 8 queues. And we hit a problem with FreeBSD here: when I run my benchmarks, the results with a firewall enabled are worse than they should be. This is an Atom CPU, but I have a Xeon server with 8 cores and I see the same problem. So I tried reducing the number of queues to 4; that is the blue line. It is not logical. On the Xeon, with 4 queues I get much better performance in every case. But on this Atom, the funny thing is that I only gain with the firewall enabled: if I want to use a firewall, I must reduce to 4 queues, but if I run without a firewall, I must keep the 8 queues. It is quite complex, and I do not know why; it only shows up once a firewall is enabled, but it is not simple.
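For reference, here is a sketch of how one would pin the queue count on FreeBSD. hw.igb.num_queues is a real igb(4) loader tunable, but the right value clearly depends on the workload, as the measurements above show:

    # /boot/loader.conf
    hw.igb.num_queues="4"   # limit igb(4) to 4 queue pairs; the default 0 means one per core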
Something else: the manager. The manager is just Ansible. It sends commands over SSH and it keeps text files; there is nothing more, just Ansible. A simple VM, or even a PC Engines APU, is powerful enough to be the manager; it is a very, very small piece.

Current status of the project: I am still in the proof-of-concept phase, because I have deployed routers in Europe and America, but I still have to target Asia and Africa, where the MTU of the Internet links can be quite different, and OpenVPN is quite picky once you need MTU tricks. I must finish deploying in those countries before continuing the project. I have to start the web GUI; I am not very motivated, but I have to do it. And I want to propose an idea to the OpenVPN project. Today, if you give OpenVPN a list of multiple VPN gateways, it tries the first gateway, and only if that one does not answer does it try the second. But I am shipping boxes all over the world: I do not want my box in Japan to connect to my VPN server in Europe. I would like it to test the latency towards each gateway and connect to the one with the best latency. The problem is that I first have to get Orange to agree to contribute code to the OpenVPN project and submit the patch; that is just an administrative problem inside Orange, but I think you get the picture. That is the current status.
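To make the gateway-list behaviour concrete, here is a hedged sketch of a client configuration excerpt. remote and remote-random are standard OpenVPN directives, the host names are invented, and stock OpenVPN walks this list in order (or shuffles it with remote-random) rather than measuring latency:

    remote vpn-gw-eu.example.com 1194
    remote vpn-gw-us.example.com 1194
    remote vpn-gw-ap.example.com 1194
    remote-random        # shuffle the list instead of always starting at the top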
Now let me explain how, if you want to build this infrastructure from scratch, you can do it. I will show you with a virtual lab. All my tests, this whole design, were built with bhyve on the same Supermicro VPN gateway: all the virtual machines, and my image generation too. For example, for my work I want to create this kind of lab: a manager; a device simulating our internal network with its OSPF domain; two VPN gateways; one Internet router; two VPN routers; two laptops. For this I use just a script, one command line. If you want to try it, last week I published a demo firmware image that you can download, or you can download the BSDRP source code and build the specific image yourself. Once you have a binary image, the BSDRP lab script lets you simply start 9 VMs with this image, and it starts them in full mesh. That is what I used; it is quite simple. For example, with bhyve, it tells you it started VM 1, that there are a number of vNICs, and for each one the name of the VM it is connected to. Then you get this kind of diagram, and you just use the interfaces you need. For information, I started these 9 VMs on the PC Engines APU; it gets slow, but it works. And you just set the number of vtnet interfaces to generate the lab you want.

I do not use prepared VM images or anything like that, because I generate a full disk image: I want to exercise the whole boot process from the very beginning, because I want to be sure that after flashing the image and rebooting, the box will come up. That is why I use a full virtual disk image with bhyve and not just a VM-only image.

Now, a FreeBSD detail: interface names. My VMs have vtnet interfaces, while real hardware has a Realtek here, an Intel there, names like igb. So I chose, in the default image, to rename everything: the first interface becomes net0, the next net1, and so on. It is just an rc script with a long list of drivers to rename, and it matters for everything that follows.

Then, when you start the lab, you have 9 VPN clients trying to obtain an IP over DHCP and trying to reach an enrollment gateway. You log into the VPN client you want to use as the manager and declare to it: OK, you are not a VPN client, you are a manager. Because I am using certificates and SSH, it needs the SSH private key, and it needs the private key of the CA used for the VPN. On the demo image all these files are included; that is why it is a demo, there is no protection. In a real deployment, because you build your own image, you send this archive yourself: you give the manager its information, here is my internal domain name and so on, and it installs itself. It is just a shell script, and you can read it: it disables the VPN client role and generates a full set of templates for the VPN gateways and VPN clients, it generates a configuration file, it extracts the SSH keys. OK: now you have a manager ready.

The second piece is the VPN gateway. On real hardware you would not rename the interface, because it is already right; on this appliance, because the interfaces are vtnet, you rename the interface, and you say: OK, now you are a gateway, here is your Internet IP, here is your Internet default gateway. It is the same kind of script as before. The role is just a shell script: at that point it disables the VPN client role, configures the internal IP, and starts BIRD, because we have a worldwide internal VPN, and then it announces its IP via OSPF. But this is only the bootstrap; the VPN gateway does nothing else yet. It starts, and it is reachable by every device. For the rest, as I explained, you have the Internet router, the internal router, the desktops: just a role for VM number 5, VM number 6, 7, with all the usual stuff installed, DNS, DHCP and so on.

OK, now that part is finished. You have a manager; you have one or two VPN gateways in bootstrap mode that do not do anything yet; you have an Internet router, an internal router, and clients trying to reach your VPN gateways. The first step is to register the gateway, the device, on the manager. That is the gateway command, a Python script. You say: OK, create a new gateway. What is its hostname? What is the IP: the external, public one; the loopback, which is for management; the subnet for the enrollment VPN tunnel; various pieces of information. The script is very small: it just creates the Ansible host variables for this device. That is all it does.
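As a sketch of what such generated host variables might look like: the file layout is standard Ansible host_vars, but these variable names are invented for illustration and are not taken from the real code:

    # host_vars/gw1.yml (hypothetical)
    external_ip: 203.0.113.10          # public address the VPN clients connect to
    loopback_ip: 10.255.0.1            # management address, announced via OSPF
    enrollment_subnet: 10.254.0.0/24   # addresses handed out to unregistered boxes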
All the information you entered on the command line, or later through a form in the web GUI, ends up in a host file generated with all these variables, and then the Ansible playbook for the VPN gateway role is run against this device. When that command finishes, you have a running VPN gateway.

Because you now have a running VPN gateway, still on the manager you can say: OK, show me all the VPN routers connected in enrollment mode. So here I have two clients: there is a public IP and a virtual IP, and that is the point of the virtual IP handed out by the enrollment VPN gateway; they are ready to receive your orders. Yes: on the VPN gateways there are two OpenVPN processes, one for clients in enrollment and one for clients registered afterwards. Once you see the virtual IP, you use it if you want to register, say, the Australian box. It is almost the same as before: you say, OK, now create me the VPN router; it is named Sydney; here are its parameters, its internal subnet for identification; you give the virtual IP; and the script again just creates an Ansible host inventory entry for this device and runs the Ansible playbook to deploy the whole configuration you want. Because I use OpenVPN with per-client configuration, the playbook also updates the client-config-dir (CCD) files on the production gateways. Then, automatically, the remote VPN router is rebooted. The router now generates a certificate for itself, reboots again, connects to a production gateway with its own certificate, and from then on it has access to all the internal networks.

OK, that is nice, but installing the box is not enough; it is a network access device you have to operate. For that you have all the usual operations: for example, you can delete a device. Same kind of command: OK, I have retired my device; the manager will, if the box is online, connect to it and send a factory reset, or not, as you choose; it deletes the certificate and updates the CRL on all the gateways. And that is the whole trick. It is just Ansible. Which means: if you are a FreeBSD person, the structure is standard nanoBSD, and you will find your FreeBSD configuration files in the standard places; if you already know Ansible, it is just the standard Ansible way of doing things. There is nothing special, just scripts; you can try all the commands, you already know how to use them. There is no magic, nothing esoteric: I try to follow the correct way of using Ansible, its architecture and all the rest. That is what I have done for now.

One thing I have not started yet is the upgrade of these devices. As I said, the firmware is not confidential, so I can put it on the Internet. There is nothing very sensitive in the firmware: without a valid certificate people cannot connect to our VPN, so they cannot do anything with it afterwards.
In that context, for example, for upgrades: if you have 1000 VPN routers to upgrade, you should not push the upgrade through the tunnel, because you would impact all your users who are working; instead you can ask the VPN routers to download the image from a CDN on the Internet. There are lots of small details like that when you manage a large-scale deployment. I believe that is all. Yes, questions?

[Question] Do you also handle IPv6? Do you run OSPFv3 with a separate topology for IPv6, or only OSPFv2?
[Answer] No. If a site only had IPv6, I would use it just on the public interface and encapsulate IPv4 inside: for example in Asia, if the ISP only gives me an IPv6 address, I open the tunnel over the external IPv6, but inside the tunnel I still route IPv4, because internally everything is IPv4 for now. That is how I would use IPv6 there.

[Question] About the problems you mentioned: did you report them upstream, did you file a PR?
[Answer] I filed one; for the others I am not sure, I will check. Note that I do not use packages for upgrades, just a firmware image; it is nanoBSD. And no: on the VPN gateways I use sysrc to modify the rc configuration in place, because I do not want to reboot a VPN gateway; that is why I deploy a playbook onto the gateways. For the VPN routers, the clients, I can simply push a whole new configuration and reboot. Two different ways of managing them.
[Question] Why do you use sysrc to patch the rc configuration? You could use a template, with variables, and render the whole file.
[Answer] Yes, I could do that.
[Question] You may not want to do it with sysrc; you want everything in the code, so you can review it, and you only get very small, explicit changes.
[Answer] Yes. I am not a sysadmin, you see; I am still learning that part.

[Question] I am curious about the factors in choosing FreeBSD for this solution rather than OpenBSD or Linux. Everything you use is available on OpenBSD, OpenVPN, OSPF and the rest, and there may be reasons, in production, to prefer OpenBSD on the routers; certification, compliance and so on can be decision factors.
[Answer] Why FreeBSD? Because, as you saw, I do a lot of benchmarks, and today I do not see OpenBSD scaling once we have more than one core. It is really just that; you would certainly gain some simplicity. Also, there are some really neat embedded appliance options out there, but those are based on MIPS.
And if you want to find MIPS competence today, it is much harder than finding someone who simply knows x86. I am a lazy guy, but that is a real consideration: x86 is the common skill. And with x86 I can run bhyve; I start 9 VMs with bhyve if I want. Can I do that on MIPS? The development comfort is just not comparable.

[Question] When you have your full mesh and the diagram of everything, do you rely on OSPF? I suppose the idea is that if a node or a link goes away, it can route around it.
[Answer] Yes, but I do not use all the links. I only configure the interfaces I need for the lab; all the others stay unconfigured, so OSPF has no idea they exist.
[Question] Because what I meant is: with the full mesh, everything stays reachable.
[Answer] Yes, but the full mesh is just how my script works: it connects all my routers in a full mesh, and then I use only the links I need for the topology of my lab. I am lazy; I do not want to do it the GNS3 way, where you have to click to select every link. That takes too long. Another question?

[Question] I am quite curious about your Supermicro experience, because we have deployed hundreds of them without that kind of trouble; maybe you got a bad box of some sort. We have thousands of machines in the field that just run, and they are really solid.
[Answer] Yes, and that is why I do not want to claim it is a hardware problem; I do not have enough samples to judge. I have more than 10 PC Engines boxes, so on those I can do statistics; today I have just one Supermicro VPN gateway, and that is not enough to form an opinion.

[Question] What does your BIRD configuration look like?
[Answer] Ah, I do not have that slide with me; it is an interesting one. At the beginning with BIRD I had a lot of difficulty, because I had not understood that there is a strict separation around BIRD's own routing table, and you have to learn what "export" means in BIRD's vocabulary. Once you understand this very powerful tool, you can have filters between the different tables: you have BIRD's routing table, you have the kernel's table, the system table, you have the OSPF table; they are separate tables, and you have the same kind of filters between all of them. I love it. If you run a route server, you can have a BGP table that is not even linked to your system's kernel table. It is great.

[Question] Have you considered implementing something like Cisco's DMVPN, for automatic meshing?
[Answer] No, that is too complex. I would love that feature for my solution, but I did not find it anywhere. I would have to check what it would take, but OpenVPN as it is solves my problem, so I do not have to.
[Question] Would you want to bring that to the OpenVPN project?
[Answer] No. OK. Another question? No? Great. Thank you.
Presenting a project for large-scale, plug&play network appliance deployment. How does a lazy network administrator build, deploy and manage thousands of network appliances all over the world? This talk presents an example of a solution combining FreeBSD, OpenVPN and Ansible to answer this question, starting from the initial needs of providing multi-role network appliances: VPN router, Wi-Fi access point, captive portal, firewall, etc
10.5446/18661 (DOI)
Welcome to my presentation on mandoc. The topic this year is becoming the main BSD manual toolbox. My name is Ingo Schwarze, schwarze@openbsd.org, and I have been an OpenBSD developer for the last six years. While I have also contributed in a few other areas, it happened that my main focus became documentation tools.

One thing that is always nice for starting a talk is reminding everybody that we are usually standing on the shoulders of giants, and the pioneer in this area is really Cynthia Livingston, whom you are seeing here on this picture. She designed the main documentation language we are still using today, in 1989 and 1990: the mdoc language. She implemented that language herself, she translated the whole corpus of BSD manuals from the old man language to the new mdoc language, and in the process she also rewrote all the text that was still encumbered by AT&T copyrights. All that by one single person. When I talked here about mandoc four years ago, when I first presented the mandoc toolbox, my focus was: this is a completely new tool and we have to train it to do real work. You will see how the beast has matured by now.

OK, so we are talking about documentation tools. The key point, from my point of view, about system documentation is really that all documentation should be in one place and in one format: not a part on the web and a part in HTML and a part in /usr/share/doc and whatnot, but one place, one format. That makes it easy to find, easy to read and easy to write, and only if it is easy to write is there any chance that it will be correct, complete and concise. That of course puts a particular focus on which system to use, if you want to use one system.

The basic markup syntax we are still using today goes back more than 50 years: Jerome Saltzer started the roff (runoff) markup in 1964. It is unobtrusive, it is diff-friendly, it is easy to hand-edit, and there are simple tools to produce high-quality output in various formats from it. The basic manual structure goes back to the very first version of Unix, to Thompson and Ritchie. The man language still used in Linux today comes from the famous Version 7 Unix, the last Research Unix version that is publicly available. But the real revolution in documentation languages was the invention of semantic markup, of the mdoc language, in 1989 and 1990 by Cynthia Livingston, which then got to the world with 4.4BSD. At about the same time James Clark wrote the GNU implementation of troff, which, even though it is GPL software, dominated the toolchain world in the BSDs for more than two decades, until finally, around 2010, we started to introduce the BSD-licensed mandoc toolbox step by step into the various BSDs. By now practically all major BSDs use it: OpenBSD, FreeBSD, NetBSD; even illumos has switched to it.

Now, what is in that toolbox we are talking about? From the user perspective it has become really simple: you basically have one userland program that you are calling, man, the manual viewer. That thing, when you call it, does three things. It finds one manual page, in the file system or using a database; you can decide which file to find either by giving its name or by giving a search query. In the second step, man will format the manual page, and in the third step it will display it, usually using a pager. Even in this very elementary area there is quite some progress since I presented here last year. Last year I said man, the manual viewer, is out of scope; that's not part of the toolbox.
Now we have one unified interface for all of them: for the viewer, man; for the formatter, mandoc; and for the search tool, apropos. They all take the same options, and, very new this year, 2015, we have a unified and very much simplified configuration file format that I'll show.

The toolbox also contains a few auxiliary components: a database generation tool, a syntax checker, a parse tree debugger, a built-in format converter from mdoc to man, and output front-ends for various formats like HTML, PostScript, PDF.

Before really starting with the individual topics, I'll give you a very brief overview of what I'll talk about. I guess I never talked about so many different topics in a single public talk; it's just because so much happened in the last year, and I hope I won't get mired in that swamp of topics. The unified user interface, the new viewer and configuration file format, is the first thing; the second is the same for the web; the third is improved formatting of mathematical equations; then improved Unicode support; hunting bugs with a fuzzer program; detecting use of unsupported features; and converting manual pages from another language, the Perl documentation language, to our common mdoc language. These are the main things that happened during this year, and I will wrap up with a status report for various operating systems and hints at a few possible future directions. By the way, the pictures I'm using for illustration were taken by other people along the road of the bicycle tour I did around southern Ontario just after last year's conference.

OK, so the first thing we did: OpenBSD no longer uses the traditional BSD man program, but an implementation integrated into mandoc, into the formatter. The traditional setup was that man would fork and execute twice: once to call the formatter, and then again to page the output. Right now, the program finding the files in the file system and the formatter are the same program. Now, what is the point of doing that? The point really is to have a unified interface for all three main front-ends, which means that when you call the viewer, when you type man some page, you can use command line options that were traditionally only available to the formatter, like saying which warning level you want to see or which output format you want. You can now do things, in OpenBSD, not yet in FreeBSD, where this will only be available in FreeBSD 12, like man -T html, just the name of the manual, and then pipe it directly to links or something. The other way around, the search tool now has access to options that traditionally you only had in the viewer. The search tool apropos normally just lists a number of title lines for manuals; you now have options to say: I want to see the file names instead, or I want to see the command synopsis instead, or even: I give it a search query, but I want to see all the matching manual pages, complete, in one pager session right away. So that's quite flexible, and you don't need to remember different options for different programs.

Besides, it allows a simpler configuration file format, and we gain a few minor things that I'm not going to name individually, except that one of them is quite nice: we have one less userland program to maintain, and the traditional BSD man is quite old code, so it's nice to no longer need to maintain that.
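To make the unified interface concrete, here are a few illustrative invocations; the page names are arbitrary examples, and the options are the ones documented for the mandoc-based man(1) and apropos(1):

    man -T html ksh > ksh.html    # ask the viewer itself for HTML output
    apropos -s 1 compress         # a search restricted to manual section 1
    apropos Ev=PAGER              # semantic search: pages documenting the PAGER environment variable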
There are two other things that we can gain in the future that we have not yet exploited. In particular, many library manuals document not only one but two, three, twenty functions, and those typically have multiple entries in the file system: hard links. With the new database we can get rid of all those hard links, and thereby of thousands of files in the installation. Another nice thing that can be done with this (it is already implemented, only not yet integrated in OpenBSD) is an interactive chooser: you say apropos something, it comes up with a list of matching programs, and then you can just choose one of them to open the manual directly, without exiting apropos first and typing a new command.

OK, any exchange of a program for a new program will come at a cost. In this case, the cost is that database lookup is slightly slower than file system lookup. Then again, on my notebook, the additional delay for displaying a manual page is on the order of 10 milliseconds. There is another cost: when you install a new manual page on the system and want to find it with the search tools, you have to update the database. But at least the OpenBSD package tools run the required commands automatically for you, so you don't need to do it manually; and even if you forget, or it doesn't work, the man tool itself will still work: you will still see the manuals when you explicitly call for them, you just won't find them in the search tool until the weekly makewhatis run. So there are very few downsides, if any, compared to the additional features we get.

Oh, this is a nice one: the old configuration file format of the manual viewer. This is the list of all the features I identified in the old configuration file format that are completely useless. I'm not going to bore you by reading all of this to you; I'll just pick out one. You can configure decompression filters: if I have a gzipped man page, I want to use gunzip on it.
Well, the old configuration file format allowed you to configure different decompression filters in different sections of the manual. So if I have a kernel manual that is gzipped, I want to use this decompression filter; if I have a userland manual that is gzipped, I prefer the other decompression filter; and dozens of directives like that. The format was so complicated that consistently everybody hated it and nobody used it, even though people had good uses for some of it; some things you do want to configure.

So I came up with a new configuration file format that basically has just two directives, the things that people actually need. One is that you can specify a manpath: you give a directory name, and it takes that directory as a complete tree of manual pages and consistently uses that tree across all the tools. The other directive specifies output options: for example, for the terminal, saying how wide the terminal is; for HTML, saying which style sheet you want to link from the generated file; for PostScript and PDF, specifying the paper format; things like that. That is very easy to use and needs almost no learning.

A bit of the namespace in the new configuration file format has been reserved, so once people ask for it, I'm planning to implement the following features: an alias directive, which makes it easier for people using languages like Tcl to make whole trees more easily accessible with the -s option; a sections directive that allows people to configure custom sections and change the search order of sections; and filter directives, in case any operating systems use compression formats other than gzip. However, so far nobody has asked for these features, and as long as people don't need them, I say KISS, keep it simple, stupid: we shouldn't implement anything that people don't actually want. So be aware: here I really reduced the functionality, inside OpenBSD, and people don't yell at me. It's not always bad to make things simpler.
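As an illustration, a small man.conf in the new style; manpath and output are the two directives, the values here are only examples, and the set of valid output options is the one listed in man.conf(5):

    # /etc/man.conf
    manpath /usr/share/man
    manpath /usr/local/man
    output width 78           # terminal output: wrap at 78 columns
    output style /mandoc.css  # HTML output: style sheet to link
    output paper a4           # PostScript/PDF output: paper size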
Good. So far I talked about viewing manual pages on the command line, on the terminal. We also have a CGI for viewing them on the web, with basically identical functionality. OpenBSD, the www.openbsd.org website, no longer uses the traditional man.cgi Perl script by Wolfram Schneider from FreeBSD, but a man.cgi implementation included in the mandoc toolkit. The traditional setup was that man.cgi would fork and execute the system's man command; that man command would fork and execute groff, or later mandoc, but not in HTML mode, in terminal mode; then the CGI script, in Perl, with regular expressions, not even with a library, would parse that terminal output and manually convert it to HTML. Incredibly ugly. The new man.cgi is one single C program (yes, some people do write CGI programs in C) and it links in just the components needed: the mandoc parsers, the database client code and the HTML formatter code. It directly generates clean HTML, with the benefit of providing full semantic search capabilities: just as on the command line, in the web interface you can use all the semantic gimmicks in searches.

One thing that was surprising about this man.cgi was that, even though when I started at last year's main hackathon in Ljubljana all the underlying functionality was already quite mature and ready, there were about a dozen different components to tweak; the configuration syntax for this thing had to be tweaked, for example. So what I am saying is that, even with a good, mature code base, really exchanging the man.cgi completely meant that a surprisingly large number of small things had to be fixed and adapted. Even in a seemingly small project you can sometimes be prepared to have to do a lot of work. In the end it paid off, because quite a few of the features we originally implemented for the web manual viewer turned out to be useful for the command line too: one of the new command line options derives from that, the code for it was originally developed for the web and then reused for the implementation of the man command I talked about before, and even the way the apropos command sorts results was originally developed for the web viewer. So there were quite a few benefits.

But one thing we completely overlooked at first is that even if you are not doing anything with HTTPS or authentication or limited access, whenever you do anything on the web you at once get into security issues, simply because you are taking untrusted data off the net and processing it. The processing itself could be harmful, even if it only puts load on your server, and the output you throw at the user could be harmful too, XSS or whatever. So at some point we decided that we had to audit the man.cgi code for security issues, and we did that in three ways. First, starting with all the untrusted input, tracing forward and looking at what this input is used for. On the other hand, starting from the other end, locating all the places where the CGI prints output to the user and tracing backward: where is that data coming from, could it be clobbered some way? And given that there are two modules that need to be audited, the steering program and the formatter, we also identified all the places where data is transferred from one module to the other and started auditing from that interface in both directions. Of course all these tracings hopefully end up in the same code paths, but you really don't want to miss any code path, so it's good to have a bit of redundancy in such an audit. Initially, almost all the security issues found were reported by Sébastien Marie, who, by the way, has in the meantime become an OpenBSD developer; but I redid all the auditing to make sure that nothing was missed, and I'm now reasonably confident that we found most things that shouldn't be in a program run on the web. Baptiste Daroussin is planning to replace the FreeBSD man.cgi too, and he's planning to use this exact code.

Here is an overview of the security issues we actually found. One important class was unvalidated input, in particular in the URI, both in the path provided in the URI and in the query string. That led to two kinds of problems: first, reading unrelated files from the file system on the server, possibly disclosing content of files never intended for display; and second, information disclosure in error messages: even when the program realized, OK, this is strange, I shouldn't be doing that, the error messages might reveal stuff to the attacker that he shouldn't know. The fixes were rejecting absolute paths, rejecting ascension to parent directories, validating stuff up front, and paying attention to what we display in error messages.
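A minimal sketch, not the actual man.cgi code, of the kind of path validation just described:

    #include <string.h>

    /*
     * Reject request paths that are absolute or that try to ascend
     * to a parent directory; return 1 if the path looks acceptable.
     */
    static int
    path_ok(const char *p)
    {
            if (p[0] == '/')
                    return 0;               /* no absolute paths */
            if (strncmp(p, "../", 3) == 0)
                    return 0;               /* no leading ascension */
            if (strstr(p, "/../") != NULL)
                    return 0;               /* no ascension further in */
            return 1;
    }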
The other type of problems were mostly cross-site scripting issues, partly due to invalid characters embedded in query strings and partly due to stuff embedded in manual pages. When you run a manual page server, you will for example also serve manual pages from ports, and you don't really know what people put into manual pages in ports, so that should not be able to trigger cross-site scripting attacks in your CGI front-end. Basically, all these XSS things require getting the encoding right, which turned out to be quite tricky, because some of the output needs HTML encoding, some needs URI encoding, and some even needs both. So it wasn't only a question of doing the encoding at the right places, but of doing the right number of encodings and choosing the right encoding at the right places. But I guess we figured it out in the end.

One thing that is almost impossible to fix is regular expression DoS attacks. On the command line we allow people to search in manual pages using real regular expressions, and we wanted the same functionality on the web, so people can enter regular expressions into that thing; and regular expressions are so powerful that you can't really prevent them from clobbering server resources. The only mitigation we came up with is limiting the total time a CGI process can run, and so far nobody has brought the server down. I hope it can stay like that; if it doesn't, we might have to switch off regular expressions. There's probably no better way. OK, so much about the various viewers. [Audience question.] Yes? It is extended regular expressions, by the standard routines contained in the C library. [Suggestion to restrict to basic regular expressions.] Might be; I must admit I'm not really up to date on whether those are less easy to exploit. That might make sense; on the other hand, getting on people's nerves by restricting them to BREs... but it might be better than switching the feature off. If we really suffer from attacks, then we should probably consider that. It's a nice idea.

Now let's get to some things about parsing and actual formatting. The main progress last year was made in the area of mathematical equations in manual pages. Now, I admit that the eqn language is not used as much as mdoc, man and tbl, but there are some manual pages containing mathematical equations, in particular in xorg and a bit in a few library manuals. The parsing works quite well; Kristaps Dzonsons mostly finished that in 2011. But the formatting was really ugly, and we had to apply some paint to it. In HTML output, Kristaps rewrote the output module to generate MathML, in the same context in which he also switched the output to HTML5. That was actually quite straightforward: the parse tree falling out of his eqn parser can be translated one-to-one to MathML, in less than 200 lines of code, and the output looks quite beautiful in a graphical browser. Just look at it on the OpenBSD website, in the online manuals: look at a few X manuals containing matrices and so on; it works quite well.

Terminal output, at first sight, seems harder: how do you format mathematical equations on an ASCII terminal? What GNU eqn does is try to move elements up a line, down a line, and draw lines out of minus signs and such stuff, and the results are just unintelligible; it doesn't work at all. So I chose a different approach and rewrote the terminal output as a linear textual representation, and here you have a few examples of how fractions and matrices and functions look. Admittedly, that's not pretty, but at least you can figure out what it means, and that is the main thing for manuals. So the status now is that mandoc actually formats equations much better than GNU eqn, both on the terminal and for HTML, while PostScript and PDF are still the domain of the full thing, of GNU eqn.
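To give a rough idea of the linear approach, consider an eqn(7) input such as the following; this is a hedged sketch, and the exact characters mandoc prints may differ:

    .EQ
    f ( x ) = x sup 2 over ( 1 + x )
    .EN

On the terminal this comes out as a single-line expression along the lines of f(x) = x^2/(1+x), instead of an attempt at two-dimensional ASCII art.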
Another thing about processing and formatting is internationalized manuals, multi-byte characters. Now, admittedly, non-English manuals have a lot of problems: they are rare, they are hard to maintain, even if you try to maintain them they tend to get outdated, and once they are outdated they are arguably worse than nothing. However, it doesn't help at all if, in addition to all these problems, the tools hinder reading them. For that reason Kristaps quite early implemented basic UTF-8 support, but, in the same way as it's done in groff, it required a preprocessor to transform the UTF-8 input into roff escape sequences, and then it required specific output options to tell the thing: OK, I want UTF-8 output. So in addition to all those problems, when actually viewing the manual pages you also had to take care of these special options. This year I integrated the preprocessor right into mandoc, so that the input encoding is automatically detected, and I switched the default output mode from -T ascii to -T locale, which means that as long as you use the POSIX or C locale it makes no difference for you, but if you have your stuff set up for UTF-8 output anyway, then it just works. So right now, viewing a Japanese or a Russian manual is no longer more difficult than viewing an English manual, and I think that is how it should be.

OK, so much for functionality; now let's come to the things that don't work, because, as you probably all know, programming is kind of about getting things wrong. This predator here found us a lot of bugs. A fuzzing tool is a program that runs another program, trying to feed it varying input, trying to crash it or hang it; and the specific things advertised about the American Fuzzy Lop fuzzer are that it does compile-time instrumentation of the tested code and has genetic algorithms, such that it can discover test cases itself and execute as many functional code paths as possible, with a goal of full functional coverage. Now, getting full functional coverage for terminal output in mandoc, on modern PC hardware, takes several days of round-the-clock running; but that is exactly what Jonathan Gray, an OpenBSD developer down in Australia, did, repeatedly, since the end of last year. And I was a bit surprised that he found more than 40 issues grand total.

Now, what were these? About a third of them were cases where we assumed that our data structures had certain invariants, and these were actually violated; there were cases of general invariants violated that way, and cases of macro-specific ones. Another third were logic errors arising from excessive complexity of the code: in part unavoidable because the language design is so complex (the topic here is badly nested blocks in the mdoc language; I talked about that four years ago and can't repeat it for time constraints), and in part because of complexity in the implementation; the specific thing that caused these bugs was macro rewinding: you open a block and then don't close it again, or you close a block that is not even open. All that has to be handled, and if you do it in creative ways, it might crash the program. So these were two-thirds invariants and complexity; the remaining third were the things you expect to cause security issues, like missing input validation, buffer overflows, use-after-free. So it's not only interesting to look at the causes of the bugs, but also at the severity of the issues.

[Audience:] Isn't one of the things that you're supposed to do, not write your own string parsers?
Well, I don't know. I mean, here, everybody wrote their own string handling routines, and was that the source of all the problems, such that as soon as we adopted OpenBSD's libc the problems went away? Well, the routines available in the OpenBSD libc are mostly routines for concatenating and copying strings, like strlcat and strlcpy, but I'm not aware of any OpenBSD-specific string parsing routines. But you are right that the top five issues found were really buffer overruns, where a string parser parsed a string that was broken in such a creative way that it skipped over the final zero; and then, when that parser returns to the calling code and that calling code modifies the buffer, it can even end up as a write buffer overrun.

So one lesson learned here is: this code was written by moderately experienced people in the context of OpenBSD, so you should assume they were aware that there are dangers, and still these things were in there. And what is particularly interesting (wait, I'm just losing power) is that the easier the stuff would have been to avoid, the more dire the consequences. So: even if you pay attention from the beginning, and then after a certain time audit the code once again, looking for the simplest things (being careful when passing around pointers, making sure you don't pass null pointers, watching out for use-after-free, not forgetting to validate input, being careful with arithmetic operations), you will probably still find stuff to fix. It can't be stressed enough that some of the well-known things come up again and again. The largest numbers of bugs, in absolute numbers, were in the most complex code; so yes, it does pay off to avoid complexity if you can. The distribution of the bugs across the various modules was more or less proportional to the size of the modules, not a big surprise, but yes, it does pay off to keep code small.

In this case we were at about 1.5 serious bugs per thousand lines of code; something between half and three serious bugs per thousand lines of code is a range that you might expect. In this case we had a few aggravating factors: in particular, the languages we are parsing have no formal definitions, nothing has ever really been written down clearly and cleanly; they are not designed according to any strict paradigms but rather evolved historically. So part of the requirements and the design goals weren't known from the start but were discovered piecemeal, and again and again we had to change existing logic and existing invariants; that might have contributed to this.

One thing that would have helped tremendously (I didn't find the time yet, but if you are working on an important project and can spend the effort, I recommend it) is to explicitly specify, for all your major data structures, which invariants you intend to guarantee, and then audit your code: whether all places changing these data structures actually respect the invariants, and whether all places reading from them assume nothing beyond the invariants you have explicitly specified. In the case of mandoc, that would have caught about a third of all those issues, which is quite a substantial fraction.
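A minimal sketch of that idea, not taken from the mandoc sources; the structure and names are invented. The point is to state the invariant in one place and call the checker from every function that mutates the structure:

    #include <assert.h>
    #include <stddef.h>

    struct node {
            struct node *parent;
            struct node *first_child;
            struct node *next_sibling;
    };

    /* Invariant: every child of n points back to n as its parent. */
    static void
    check_children(const struct node *n)
    {
            const struct node *c;

            for (c = n->first_child; c != NULL; c = c->next_sibling)
                    assert(c->parent == n);
    }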
OK, so: broken stuff. Not only code is broken; manuals are broken, too. Now, what do we do about it? Well, we tell the authors; we tell the porters. For that we have three message levels in mandoc. The lowest one is warnings: a warning means the author should be aware that the quality of his code could be improved; it is clear what he means, but it might cause portability problems with older tools or the like. An error means the author has written something, but we don't really know what it means: it's inconsistent, it's likely that information gets lost, the user doesn't see the full text he is intended to see, or the formatting might be completely clobbered. That's an error. This year I introduced the third level, called unsupported. That one is not so much for manual authors as for porters: it says that mandoc has the impression, OK, this is probably valid code, but I know I can't handle it yet, so better use groff for formatting this particular manual. And there was, historically, a fourth level, and I'm quite proud of finally, after five years, having gotten rid of it: it was called fatal. It meant: you threw a manual at mandoc and it replied, no, that's so weird, I won't give you any output. Well, if there is some text in it, it should display that text, and we've finally reached that point: whatever you throw at mandoc, you get some output, and only empty input yields nothing. Good.

In the base system, the problem of broken manual pages is not really hard: if you find a broken manual in the base system, you fix it and are done with it. In ports that's not really an option; you can try sending patches upstream, but it's not likely that something will happen. The good news is that by now, after several years of development of mandoc, about 95% of ports manuals just work. But what about the remaining 5%? In OpenBSD we mark those ports where the manuals don't work with mandoc with a USE_GROFF variable in the Makefile; there are still about 200 such ports. These manuals are pre-formatted at port build time, and the formatted versions are packaged. The advantage, obviously, is that end users get perfectly formatted manuals for every port, though from a content perspective something might still be off. The inconvenience is that you need support in the ports infrastructure for such a thing (Marc Espie wrote that years ago, and it works) and the porters need to maintain this USE_GROFF variable for every single port.

To avoid this work, FreeBSD has chosen a different way. What they do is: the man program doesn't run mandoc right away, but first asks mandoc, what do you think about this manual page, can you deal with it? If mandoc says, yeah, that looks good, it's run again, that time for real; and if mandoc says, no, I don't like that, then groff is run instead. The inconveniences are considerable, of course: at the time you run the man command, the page has to be parsed twice, which costs time; it is particularly bad in case mandoc doesn't realize it's unfit for the job, because if it's too confident, the user gets incomplete or misformatted output; and on the other hand, if mandoc is too shy and says, no, I don't want that, even though it could handle it, then time is wasted on running groff. So it's a trade-off which way you do it. NetBSD has a very creative way to handle it: if I understand correctly, they just ignore the whole problem, and it seems to be good enough for them; I don't hear complaints about those probably 5% of broken ports manuals in NetBSD.
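For authors and porters who want to check a page themselves, the syntax checker mentioned at the beginning is driven by the same message levels; for example (option spelling as documented in the mandoc(1) of that time):

    mandoc -T lint ./foo.1              # parse only; report warnings, errors and unsupported features
    mandoc -T lint -W unsupp ./foo.1    # report only unsupported-feature messages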
Well, in the future we might improve this in two ways. One way would be to improve low-level roff support in mandoc and to remove USE_GROFF from various OpenBSD ports; another way would be to improve the -Wunsupp logic such that the number of problems in FreeBSD is reduced. At least naddy (Christian Weisgerber, a colleague in OpenBSD) and myself hope that these two ways will ultimately converge, so that we can go the NetBSD way and everything just works with mandoc; but that will still take some time.

Okay, at this point I'm through with the mandoc toolbox in the strict sense. Now I'm talking about one of the companion tools, a converter from the POD format to the mdoc format, because in that area we have made quite some progress during the last few months. Why is POD relevant? Well, after the mdoc format used in the BSDs and the man format used in Linux, I guess it's the third most used format for manuals: it is used by Perl, it is used by OpenSSL, it is used by FFmpeg, by various projects. Usually these pages are converted to the old man format by the pod2man program, which is itself written in Perl. The downsides are that you get no semantic searching, and that the developers have to learn another formatting language, the POD language, which is less powerful. So learning two languages, one of which is even less powerful than the other, doesn't make a lot of sense. So we have decided to convert the LibreSSL manuals from POD to mdoc. Anthony Bentley did half of that work already last year, and it is committed, so the libssl manuals are done; I'm currently working on the libcrypto manuals together with a guy from Düsseldorf, Max Fillinger. For Anthony it was still quite hard, because he only had a prototype of the pod2mdoc tool, and it required a lot of manual post-processing. I've now improved a lot of details, in particular in things like whitespace and closing punctuation and quoting. You might say, well, that all seems quite minor, but you need to keep in mind that the goal is to commit the converted manuals, so the generated code must be clean and maintainable, because after that, developers are going to hand-edit it, for decades maybe. And cleaning it up by hand is quite tedious; we are talking about hundreds of manuals here.

But admittedly, when talking about improvements to this pod2mdoc converter, the conceptual things are even more interesting. Keep in mind that the POD format has no semantic markup; it basically only says things about the physical formatting, bold and italic and so on. But in the mdoc output you want semantic markup, and I've written some heuristics that look at the text, really, where there are parentheses and commas and blanks, and figure out: oh, this might be a function declaration, in particular when it's in the SYNOPSIS, and add the missing markup on the fly, for function types and function names and function arguments and so on. And not only that, but it also uses hash tables, using the ohash library written by Marc Espie, and remembers the names, such that when these function names and argument names reoccur later in the DESCRIPTION, they can be found in the hash tables and the correct macros can be inserted in the text, which considerably reduces the amount of manual post-processing that needs to be done. I released all that last month, so it's quite new. One thing that is interesting about this is that it is somewhat similar to stuff Eric Raymond has done in the context of the tool he's using for converting man manuals to DocBook; how is it called, that thing? I've forgotten the name.
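To illustrate the kind of transformation these heuristics aim for, consider a small hypothetical fragment; the exact output of pod2mdoc(1) may differ in detail. A POD synopsis is purely presentational:

    =head1 SYNOPSIS

     int SSL_connect(SSL *ssl);

whereas the generated mdoc carries semantic markup for the function type, name, and argument:

    .Sh SYNOPSIS
    .Ft int
    .Fo SSL_connect
    .Fa "SSL *ssl"
    .Fc

It is this function-level markup that later makes semantic searching, and the hash-table lookups in the DESCRIPTION, possible at all.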
Anyway, a dream for the future is that we might use similar logic to the one developed here to extract semantic information from man manuals, and enable, for man too, the semantic searching that we have for mdoc documents. But there is no clear concept yet for how to do that; it's just an observation that the algorithms needed for extracting this information from the various formats, be it POD or man or whatever, are quite similar.

Good, so let's get to the status in OpenBSD. Most of the work was already done last year, so a lot was complete by 2014: in particular, mandoc has been the only documentation formatter in OpenBSD base for almost five years now, and the search tools had been switched before last year's conference. The main progress here since last year is the new online interface, the unified interface for the formatter and search tools, and the switch of the manual viewer to the new implementation; and all that was released last month with OpenBSD 5.7. So even if you install OpenBSD-stable now, you get all of that.

In FreeBSD, part of this will only be available not in the next release, but really in the release after that, FreeBSD 12. But FreeBSD has made even more progress than OpenBSD this year. Last year I could only say: okay, they have it in base, but don't use it right now. Baptiste Daroussin has done tremendous work: he switched the default formatter at the end of last year; he has been using the unsupported option I explained for ports manuals since March this year; he has switched the search tools; and since a week ago, the code in FreeBSD is completely up to date with the latest stable mandoc release. All of that is going to be released with FreeBSD 11, and the only thing he postpones until FreeBSD 12 is switching the man implementation, because he's sensible enough to say we shouldn't change everything at the same time; that might alienate users, people might get upset if everything changes at once. But very impressive progress here in FreeBSD.

Unfortunately, in NetBSD and DragonFly almost nothing happened. NetBSD has been using mandoc as the default formatter for longer than FreeBSD, but they don't have semantic search tools; they have their own search implementation, which does full-text search but lacks semantic search. And DragonFly still has it in base, but isn't using it.

Another system that made impressive progress is illumos. In previous years we often cited Solaris-based systems as examples of very old, very traditional stuff that didn't even have any mdoc implementation. illumos has decided to, bit by bit, translate all their manuals from man to mdoc; so the same thing Cynthia Livingston did in BSD in 1990, they are now doing too. It will be interesting to see whether they complete it in one year like she did; I guess not. But Garrett D'Amore switched the default formatter in the same commit in which he imported mandoc into the base system, so now they have mandoc and they are using it. Not the newest version, but it seems to just work for them. That's the third system, grand total, that did the switch.

And the first non-BSD systems: in Linux there are, surprisingly, two distributions that rely completely on mandoc. Both are very small ones, Alpine Linux and Void Linux, but they have everything: they have the search tools and they use them, the manual viewer, the latest release. Alpine Linux was the first non-BSD system ever to use the mandoc-based man. And there are a few others: Arch Linux has an official port, Slackware and CRUX have unofficial ones. But none of the major Linuxes has really picked up anything so far, even
though ports have been available for all of them, unofficially, and I'm regularly testing on Debian before releases. We shall see what happens there. Yeah, okay, it would somehow fit the Arch Linux philosophy, more or less. Well, let's just be patient, and if anybody needs help, they should just come to us, to Kristaps and myself, and we'll try to help them if they need anything. Other operating systems: Minix has it in base, but has somehow been completely apathetic for five years or something. There are some user communities: in OS X there are ports available both in Homebrew and MacPorts, and there is even a halfway up-to-date port for Windows. Such information is periodically updated on our website, so you can see the status between conferences, too.

So, the status summary: fully integrated in OpenBSD, Alpine Linux and Void Linux, and, except for man, also in FreeBSD-current; the default formatter in NetBSD and illumos; at least in the base system in FreeBSD 10, DragonFly and Minix; official packages exist for FreeBSD 9, Arch Linux and pkgsrc; and then there are a few systems having unofficial or outdated packages.

I'm regularly announcing goals for the future at conferences. Of those that were announced, four were reached this year: the replacement of man.cgi, the integration of preconv, the switch to locale output by default, and replacing man in OpenBSD. Several things are in progress: we are working on the libcrypto manuals in LibreSSL, and improving pod2mdoc to facilitate that; I'm unifying the parsers, aiming for better roff support in the future (that's a very complicated subject that I really couldn't cover in this talk; if I were to talk about that, I would have to give a full talk, maybe sometime later); the -Wunsupp mode is still being improved; and we want to, at some point, delete all those redundant hard links in the file system, as FreeBSD already does. One thing FreeBSD already did a month ago, but I only learned about it yesterday, so it's not in the slides yet: texinfo is no longer in FreeBSD. Baptiste Daroussin used the texi2mdoc utility by Kristaps Dzonsons to convert all the texinfo documentation to mdoc. I'd like to do that in OpenBSD too; this is a very nice example of FreeBSD leading the way, actually. What's a bit stalled is providing help with man-to-mdoc conversions, but that can be picked up again; it's basically what Kristaps has started with docbook2mdoc.

Let me mention two things that have not yet been started. At some point I dream of using pod2mdoc not for one-off conversion with manual post-processing, but inside the build system: instead of running pod2man on the Perl manuals, we could run pod2mdoc, convert all the POD Perl manuals to the mdoc format, and then we would get semantic searching in the Perl manuals in the OpenBSD base system for free. And another thing: one thing that is good about info (not talking about all the problems that info has, but one thing that is good) is the internal linking within the manual pages. As long as we don't change the basic way manual pages are built (self-contained, linear, easy to navigate by hand), getting additional options for linking inside manual pages would help a bit, and one idea for doing that was functionality similar to ctags that could be integrated into less; but that really needs to be worked out.

To conclude, I'd like to thank a few people. Kristaps Dzonsons, of course, the original author of mandoc, who again contributed quite some code this year, for example
the new eqn(7) parser and the HTML5 and MathML output. Then, of course, Jonathan Gray, who did extensive testing with AFL, reporting more than 40 important bugs. Baptiste Daroussin, for tremendous work on the FreeBSD system integration, and also for sending source code patches. Christian Weisgerber (naddy) in OpenBSD, for removing USE_GROFF from many ports and helping with OpenBSD porting work. Thomas Klausner of NetBSD, for pkgsrc maintenance; while NetBSD is lagging a bit in the base system, Thomas is doing excellent work on pkgsrc. Natanael Copa of Alpine Linux, who did the system integration there and proved that mandoc can be used as the system manual formatter on Linux. Paul Onyschuk, also Alpine Linux, who suggested the implementation of man: if it weren't for Paul, who said, oh, why don't you implement man; and I said, well, that's a bad idea, last year at BSDCan I said that's out of scope; but then I stepped back and thought, well, actually, why not? We already have almost all the code that is needed; it just needs to be shuffled around a bit. And this year there were quite a few people who contributed patches, and of course, again, even more people who reported bugs or suggested features.
The original audio stream of my presentation at BSDCan 2015 in Ottawa (except for the first 30 seconds and the last four minutes; those two chunks failed to record in Ottawa, so I had to re-record them). The associated video stream contains the presentation slides captured off the beamer input by the conference organizers, so video and audio are in sync. Topics are the new man(1), man.conf(5), man.cgi(8); eqn(7) HTML5 and MathML output; UTF-8 improvements, afl(1) audit, -Wunsupp, pod2mdoc(1), a status summary in various operating systems, and possible future directions.
10.5446/19174 (DOI)
And everyone is already here. So, I'm Nigel. I've been working on Multipath TCP and an implementation for FreeBSD for a couple of years now. Today, basically, I'll be going through a little bit about the current state of the implementation. But of course, Multipath TCP is not, sort of, generally known; people don't generally know the protocol inside and out. So I'll spend a little bit of time talking about the actual protocol itself, and then give kind of an overview of the implementation, in terms of what has changed from how things worked with standard TCP in order to enable multipath. And hopefully, if I have enough time, which I should have: I've got basically a simple topology at the end. When the next patch comes out, there will be some documentation, and there's a simple example topology for setting up some VMs and doing some multipath stuff, and I'll show a little bit about how that works.

So, just on me; there's not too much exciting there, really. I did an undergrad in telecommunications engineering and networks. When I graduated, I did a couple of years of network research, so traffic classification stuff, QoS stuff. Then I left for a while and did some totally random tangent career things, before eventually coming back to network research a couple of years ago, and that's when I got back into this Multipath TCP stuff. At the moment I'm completing a master's degree, so postgrad, and I think it's called "research enabling Multipath TCP for FreeBSD" or something along those lines.

So, on the implementation itself: given that we're a research lab, our first kind of priority was, how can we make something that we can use to do more network research? Particularly as multipath is quite new, there are a lot of different scenarios in which you can use it. There's not one simple solution for all of this; there are things to consider in terms of congestion control, how to schedule data segments, how to manage paths, and all of that kind of stuff. So for us, it would be useful to have an implementation that makes it easy to push different buttons and pull levers and have different things happen. But it's not just about being a research tool; it should also be something that people can use. So if you have a particular use case in mind at some point in the future, and the FreeBSD multipath stack helps with that, then that's a good outcome as well. And the last thing is interoperability with the current reference implementation and whichever other implementations pop up. There is a Linux implementation at the moment, and being able to interoperate with that helps with standardization and that sort of thing.

So, kind of a bit of background. And I might as well say: there are a couple of slides in here which are very similar to what I presented a few years ago when I was here, but a lot has changed since then, so hopefully it's not too much that's too familiar and boring for people who were there; or hopefully there are enough new people that it will all be interesting. I started working on the implementation around 2012-ish, with some funding from Cisco, and that was kind of the idea of: hey, let's get something that we can use for research. So I was working on that primarily, and I was getting some help from Lawrence at the time, because essentially I'd just begun kernel development; for me, it was all very new.
And getting some help with that, in terms of design and things like that, was crucial; so Lawrence helped a little bit with that. A patch came out, a couple of patches in fact, in March of 2013, so a little while after we started; I think it might have been 11 or 12 months. And those were pretty rough prototypes: I did some multipath stuff within a very restricted list of use cases. If you did this, it would work; if you did something else, you might get a kernel panic, or you might get something crazy happening. After that, I had to switch onto another project, so I was working on that for a year and doing the multipath stuff in my spare time, so there wasn't a huge deal of progress over that period as I kind of pecked away at it. And then, around the middle of last year, I started doing my master's, and the FreeBSD Foundation provided some funding for that, for the first eight months or so; Cisco has provided some more funding as well, just for the last few months while I write a thesis. At the beginning of that was the last patch release, version 0.4 we're calling it, and there's not been any kind of news since then. I'll go into a bit about why that was the case: basically, I've gone through and redesigned a lot of the implementation. I was hoping by today to have a nice implementation that you could download, a new patch with some documentation and some pretty graphs and all that kind of stuff from testing, but the testing has gone on a little bit longer than hoped, as is often the case. So I'm still testing things, but that new patch should be out quite soon.

Okay, so what is Multipath TCP anyway? The easiest way to explain it, I guess, in a line, is: if you have a host that has multiple interfaces or multiple addresses, it allows you to use those addresses on a TCP connection. And there are currently a couple of implementations out there already. There's the Linux implementation, which came out some years ago but, within the last couple of years, has become pretty feature-complete and quite stable. And then there are some commercial implementations: the other best-known one is Apple's implementation for Siri (I don't think it's used beyond that scope), and I believe Citrix and a couple of other companies have load balancers or proxies that use Multipath TCP.

So why would we want to use Multipath TCP? Well, there are a couple of advantages you can potentially get from using it. The first one is this idea of persistence and redundancy that you don't get with TCP generally. You might consider TCP as being: you have two addresses, and if one of those interfaces disappears, you need to break that connection; that connection doesn't come back, you need to re-establish it. Multipath has this idea called break-before-make, where you can lose all of your subflows underneath, but you can keep the connection alive for a little while, and if a new interface pops up, you can resume the connection later. So from the application's point of view, you don't need to terminate your TCP session; it simply stays there and is kept alive. The other two advantages here are reduced congestion and increased efficiency. Now, that might not necessarily be the case all the time, but if used in the right scenarios, or if you use the right congestion control and so forth, then basically you can reduce congestion.
Say you've got multiple paths and one has a bottleneck: you can use congestion control to steer your traffic away from the congested path and not clog it. And of course, efficiency: basically, if you've got this extra capacity there, TCP doesn't usually use it, of course, so why not employ that extra path if we can?

Okay, so why extend TCP and not make something new? Basically, a lot of applications already use TCP, so we don't need to modify them in order to use MPTCP; we can add this extra functionality without changing our applications at all. And one of the big considerations when designing the protocol was: how can we make it work within the Internet as it is today? How can it be made to work with NAT, with middleboxes which may not like protocols that are not UDP or TCP? How do we make it compatible, so that we can continue to use it basically straight out of the box with a new kernel?

So here's the basic simple scenario, and it's one of many, of course, but the simplest one we know: we all have phones with multiple interfaces on them, say cellular and Wi-Fi. Let's say we've got a standard TCP session on a mobile phone. If we move out of range of a Wi-Fi access point, then our TCP session is essentially going to end at that point and can't continue; if we want to continue transferring data, we need to set up a new connection. In the multipath case, we can set up our connection, and the multipath connection is aware that we've got multiple interfaces in this case, so we've got Wi-Fi and we've got a cellular interface. Let's say our Wi-Fi disappears because we go out of range: it's able to internally just transfer the traffic to the cellular interface.

Okay, so there's a little bit of terminology that I'm going to use. One term is subflows; I use that a lot. So, basically, if we look at this picture, we can see how Multipath TCP works. We have a process, and we have a socket: the process says, give me a TCP socket. What we really get underneath is an MPTCP connection, and that MPTCP connection is going to manage a bunch of subflows underneath, kind of transparently, in order to spread data over multiple paths and so forth. So these green arrows here: there may be one or two or three subflows sitting on the network. From the network's perspective, that just looks like three unrelated TCP connections. From the process's perspective here, the application doesn't know anything about this; it just thinks it's using TCP.

Okay, so in order to set up some of this stuff, to control these subflows and manage our connections, there needed to be some extra signalling. And, again, the least intrusive way, in terms of being compatible with today's Internet, was to use TCP options to send some extra MPTCP information. So, in this case, we've got a new MPTCP option, and within that option there are a bunch of subtypes. You may not be able to see them, and they're not super critical to see right now, but essentially we've got stuff that sets up new connections, adds new addresses to an existing connection, and provides some extra accounting information (I'll talk about how there's an extra sequence space sitting on top of standard TCP that's used to aggregate data), and we've got a few connection-close things there.
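As a rough sketch of what that option space looks like on the wire, following RFC 6824 rather than any particular implementation, with illustrative names:

    #include <stdint.h>

    #define TCPOPT_MPTCP            30      /* IANA-assigned TCP option kind */

    /* Subtypes, carried in the upper four bits of the third option byte. */
    #define MPTCP_SUB_CAPABLE       0x0     /* set up a new MPTCP connection    */
    #define MPTCP_SUB_JOIN          0x1     /* add a subflow to a connection    */
    #define MPTCP_SUB_DSS           0x2     /* data sequence signal: maps, ACKs */
    #define MPTCP_SUB_ADD_ADDR      0x3     /* advertise an additional address  */
    #define MPTCP_SUB_REMOVE_ADDR   0x4
    #define MPTCP_SUB_FASTCLOSE     0x7

    struct mptcp_opt_hdr {
            uint8_t kind;           /* TCPOPT_MPTCP                     */
            uint8_t len;            /* total option length in bytes     */
            uint8_t sub_ver;        /* subtype (high nibble) | version  */
            uint8_t flags;
            /* subtype-specific fields follow, e.g. the 64-bit keys
             * exchanged by MP_CAPABLE during the handshake below */
    };

The point is simply that all of the multipath machinery rides inside ordinary TCP options, which is what keeps it middlebox-compatible.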
So, in terms of setting up a connection: quite simple, it simply piggybacks on top of TCP's handshake. One host may send a SYN, and it's going to add an MP_CAPABLE; that's the option that says, I'm capable of using Multipath TCP. If the other box is capable of using Multipath TCP, it responds in kind, and then, on the final ACK, again MP_CAPABLE. And at that point, the session is considered a multipath session, even though we're only using one address at this point.

So, adding in another subflow. Well, there are a couple of ways to do that. One way is to advertise and say: hey, I've got this particular address available; if you want, you can connect to it. So I've got, at the top here, a host sending an ADD_ADDR option. We've established that connection, and now, in one of the packets, we're saying in our options space: okay, I've got an extra address; if you want, you can add this into the connection, you can try to connect to me. So we're doing that on our already established interface here. If the other box chooses to do so, it can send a SYN to that new address with an MP_JOIN; an MP_JOIN strictly relates to adding more subflows into a connection. And, again, it goes through the handshake as TCP does, and at this point you can say: okay, now we've got two subflows, between interface one and host B over here, and interface two and host B over here. And you don't necessarily have to advertise an address: you can simply join from an address that you have, you can join directly into a connection. There are tokens that are used to identify an incoming SYN; so if you get a SYN that has an MP_JOIN on it, it's got some information about which multipath connection it belongs to, and at that point host B, in this case, can say: yeah, I know I've got that connection, and I'll continue this join.

Okay, so one of the crucial things about Multipath TCP is the accounting. With TCP, we know that TCP is a byte stream, and we divide that up with sequence numbers, and we use those to track our segments and do retransmits and all that kind of stuff. What we've got now is multiple TCP subflows, and we need to aggregate that data again at the receiver, let's say. How do we do that? One, I guess, kind of immediate thought is: well, you can just take the TCP sequence space and spread it out over multiple subflows. You can't do that, necessarily, because you may have a middlebox that doesn't like big gaps in a sequence space. Let's say I've sent some data on one subflow and some on the next subflow, and there's a big jump in the sequence space because of how it's been multiplexed; a middlebox may not like that. So the solution for this was to add an extra level of accounting, a data-level sequence space, which sits above the subflows and maps our data, as it comes out of the send buffer, say, onto the individual subflows. Subflows retain their own regular TCP sequence numbering, so they look like regular TCP, and then later on we take care of aggregating all of those segments together. Since we've got two levels of sequence numbering now, we need to acknowledge at both levels: the subflows will continue to send acknowledgments for their subflow-level sequence numbers, and the data level will also need acknowledgments. And just to visualize this, to make it a bit easier: let's say we've got some data to send, ten bytes here, say, and we've numbered them 1 to 10. So that's the data-level sequence numbering. Now we want to map that data onto two subflows.
So, in this case, we've got subflow 1 and subflow 2, and we're going to map three bytes into each of those subflows. The subflows now have their own sequence space: subflow 1 is at 50, 51, 52; subflow 2 is in a different sequence space. But, importantly, we've still got our data-level sequence numbers preserved here. Okay, in this case, subflow 2 has, say, a shorter RTT than subflow 1, so its data has arrived before subflow 1's, in which case we can ACK at the subflow level, because, hey, that stuff has been delivered as far as the subflow is concerned. However, the data level is still out of order at this point, so we need to keep that in reassembly until we receive the bytes on subflow 1, at which point everything's in order: we can send a data-level ACK for 7, and we can send a subflow ACK as well, for 53, on this subflow. This just shows how it might look in a rough packet framing: you've got your TCP sequence number, length, and so forth, and then you have an option which specifies the data-level sequence number for this particular data segment; it also includes a length and such, which I haven't shown.

Okay, so congestion control is kind of an interesting thing with Multipath TCP, in that, now that we have multiple subflows, we can look across all of them and basically change our congestion windows based on metrics of the different subflows. Say a particular subflow has a lower RTT or anything like that; then we can say, well, across all my subflows, this subflow is performing better, I'm going to increase its congestion window by this much, and hopefully send more data on that path. And that's just by default, of course; you don't necessarily have to do it that way. For example, the default coupled congestion controller says: at shared bottlenecks, I want to be very fair to other TCP. So if we have a standard TCP and two subflows here, we want to make sure that the subflows are not summing up to a total greater than that one TCP; but if we don't share a bottleneck, we want to be able to steer traffic more towards the larger pipe in this case.

So, I've talked a bit about adding addresses, and data sequence numbers, and congestion control, and scheduling things. How does that actually work logically, or how does it look logically? Well, the session control block; it's a little bit like a TCP control block, in fact it's very similar. It takes care of all the accounting, things like what our next data-level sequence number to send is, or what we're expecting to receive next at the data level. But there are also these other kinds of logical components that you wouldn't necessarily have in TCP, particularly this path manager. The path manager is going to be telling our session block: these are the paths that are available, or these are the addresses that I have, maybe you want to join these; and it can signal and say, use this path now, or add this as a backup path, or add this into a striped round-robin situation. We've got a packet scheduler, which basically takes care of: a write comes in, which subflow do we send on next? We need to determine that at some point, and the packet scheduler does that. For the moment, the built-in packet scheduler just round-robins, but it could do other things. And the congestion controller: do we do coupled congestion control, or do we just leave all the subflows to do their own kind of uncoupled NewReno congestion control? That can all be defined by the congestion controller.
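A minimal sketch of what the round-robin packet scheduler just described might look like in C; the names and structures here are hypothetical, not the actual patch:

    #include <stdint.h>
    #include <sys/queue.h>

    struct subflow {
            TAILQ_ENTRY(subflow) sf_next;
            int sf_usable;                  /* established and allowed to send */
    };

    struct mp_session {
            TAILQ_HEAD(, subflow) sfs;
            struct subflow *last;           /* where the previous map went */
    };

    /* A mapping hands one slice of the data-level stream to one subflow. */
    struct dss_map {
            uint64_t dsn;                   /* data-level sequence number */
            uint32_t len;                   /* bytes covered by this map  */
    };

    /* Pick the next usable subflow after the previous choice, wrapping once. */
    static struct subflow *
    sched_rr_pick(struct mp_session *mp)
    {
            struct subflow *sf = mp->last != NULL ?
                TAILQ_NEXT(mp->last, sf_next) : TAILQ_FIRST(&mp->sfs);

            for (int pass = 0; pass < 2; pass++) {
                    for (; sf != NULL; sf = TAILQ_NEXT(sf, sf_next))
                            if (sf->sf_usable)
                                    return (mp->last = sf);
                    sf = TAILQ_FIRST(&mp->sfs);     /* wrap around */
            }
            return NULL;                    /* no subflow can send right now */
    }

Each time the application writes, the scheduler picks a subflow this way and a dss_map records which slice of the data-level stream that subflow was given.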
Of course, the intention with these things, because we want to be able to make this flexible so we can do more experiments, is to have them all as modular components. So if you want to experiment with different path management, different ways of adding paths into a connection, or maybe not adding them into a connection, you can do that. Or if you want to change how packets are scheduled, if you want to, say, use the path with the lowest RTT first, then a modular packet scheduler can do that for you. And the same with congestion control.

So, as I said, there have been a lot of changes between version 0.4 and 0.5, and it's been a long time between any news about new patches. Why was that the case? Well, it was kind of a major design rethink. After releasing 0.4, I basically went back and had to assess the implementation as to how well it was working and how well I could, for example, maintain it into the future. Merging with head was becoming an issue, things like that: how much time do I need to spend merging things, how much parallel code do I need to maintain? There were certain advantages in how things were done in that initial patch, but in terms of maintaining it and keeping the MPTCP code separate from the TCP code: previously they were very much intertwined, there was a lot of overlap, and every time something changed in TCP in head, I would need to make a whole bunch of changes as a result. So the new approach, and perhaps in terms of performance it may not be the best approach, but in terms of logically separating things a little bit and being easy to maintain, I think it was the easier and better change. And that required rewriting pretty much all of the code, except for some option parsing. Another benefit of doing it the way it is now is that it's a little bit easier to add support for things like modular congestion control and scheduling, because I have more of an overall view of the TCP structure underneath.

So what does it look like logically? Well, on the left here, you can see what you might get if you draw up a standard TCP connection: you've got your socket, some protocol control blocks, and you send your data that way. Multipath is basically inserted in between the socket and the TCP layer. What happens is that the Multipath TCP control block contains a list of subflows, and each of these subflows is basically a socket, an Internet control block, and a TCP control block within.

And how does that change how TCP behaves? Here is kind of a simple diagram of what TCP might look like. Let's say we get a data segment: we may need to reassemble some data, or it may be in order, in which case we can deliver it up to the receive buffer. Do we need to ACK that? Yes, we've received the data segment; we may update some accounting, and then we can send our ACK out that way. So how does that change now that Multipath TCP is involved? Well, let's say we've just got one subflow and a multipath control block. A data segment comes in on our subflow; we can still ACK that at the subflow level. But we have some data that needs to be delivered to an application, so we pass that up to the data level. We check the data sequence numbers at that point, reassemble if we need to, and if it's in order, then we can deliver that data. Do we need to ACK what we've just received?
Probably; in which case we choose a subflow and say: hey, you, can you send the data-level ACK for me now, and send that out.

So, in terms of the structures themselves, how they look: if you create a TCP socket, you get something that looks like this. On the far right, you've got socket buffers for sending and receiving data. Down the middle, you've got your protocol control blocks, your inpcb and your TCP control block, which tracks all of your TCP statistics and your accounting and so forth. And you've got these protocol hooks down the side here, which say: my socket is going to send some data, so let's call the appropriate TCP function to send that data. What has changed now is that we basically try to retain, as much as possible, the structure of the TCP socket underneath, but what we're really giving you when you ask for a stream socket is this multipath structure. A lot of it is replicated from, and based on, what TCP was: we've got send and receive buffers that we now use at the multipath layer, a multipath control block there, and functions for handling sends and so forth. But now, if we try to send some new data, we can check our list of subflows here, run some packet scheduling, something like that, and say: okay, I'm going to use this particular subflow. And then we call a TCP function on that subflow in order to complete the request.

I'll talk a little bit now about how the send and receive data structures have been changed a little. In TCP, let's say we've got a send buffer and we've got a control block. UNA here is bytes that have been sent but not acknowledged, and send-next is where we're going to send next in our sequence space. Let's say we've sent some data that hasn't been acknowledged yet; okay, it eventually gets acknowledged, we can move UNA forward, drop that bit off the end there, and so on and so forth as we work through our data stream.

So, as I showed before, we basically retain the socket structure of TCP, but open it up underneath multipath. In this case, we've got a multipath send buffer, and then, below, we've got a couple of subflows here; each of those has its own send buffer and, obviously, its own TCP control block. And what we do is map data, using the packet scheduler, onto the different subflows. So, in this case, we've created a map, and the map says: okay, you're going to send this much data from the send buffer; the data sequence starts at this point (so that you can put that in your TCP option, saying what the data-level sequence number is); you've got this much data now, so you can go ahead and send it. Okay, so one subflow can start sending away; let's say we get another write coming in, and now we want to map all of that new data to another subflow. So we've mapped that to the second subflow. We can see that this first subflow here has sent some data in the meantime that hasn't been acknowledged yet. And now, just to show that we can basically map non-contiguous data onto a subflow, and the subflow kind of doesn't know the difference, really: let's put another map on that first subflow there. Again, the subflows act independently: as they send data and the ACKs come in, they drop it from their send buffers independently. Eventually, we may get a data-level ACK, at which point we can drop the data there; both of these will disappear at that point from the multipath send buffer.
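Pulling those structural pieces together, the layering might be sketched roughly like this in C; again, the names are hypothetical, not the actual patch:

    #include <stdint.h>
    #include <sys/queue.h>

    struct socket;                          /* the usual socket, with its own
                                               inpcb and tcpcb underneath */

    struct sf_handle {                      /* one subflow */
            TAILQ_ENTRY(sf_handle) sf_next;
            struct socket *sf_so;           /* its own socket + inpcb + tcpcb */
    };

    struct mpcb {                           /* multipath control block */
            struct socket *mp_so;           /* the socket the application sees */
            TAILQ_HEAD(, sf_handle) sf_list;
            uint64_t ds_snd_una;            /* oldest unacked data-level byte */
            uint64_t ds_snd_nxt;            /* next data-level byte to send   */
            uint64_t ds_rcv_nxt;            /* next data-level byte expected  */
    };

A send on mp_so then walks sf_list, asks the scheduler for a subflow, and calls the ordinary TCP output path on that subflow's socket.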
And what I haven't illustrated on this diagram, sorry, is that this mapping that occurs from the main send buffer can actually be replicated across multiple subflows. So, let's say we've mapped this section of data to a subflow here: we can take that same bit of data, map it to another subflow, and transmit them both at the same time. Whichever one ACKs it first at the data level, we can drop it; the other subflow can continue trying to send it, and just drops it locally when it's ACKed at the subflow level.

[Audience:] So you've dropped that first chunk up here, so that can be taken out of the send buffer. Then what happens when you get the first chunk ACKed from the interface on the right there? Can you update the data-level UNA, given that the stuff you just sent to the first subflow hasn't been ACKed yet?

Sorry, could you repeat that one?

[Audience:] You sent the first part and it got dropped off.

Yep.

[Audience:] Now, before the rest of the first subflow finishes, part of the second subflow completes, and it's eligible to be dropped.

Right, yeah. So, basically, the ACKing at the data level is cumulative; you can't drop any data until... and this is part of the head-of-line blocking. So what you're saying is that if this data is sent and received and acknowledged first, we can't actually drop it until all of this is done. Yeah. And that's one of the issues in scheduling, really: you don't want to create head-of-line blocking for yourself by, say, sending your first data on a slow subflow and sending the rest of it on something quicker. That needs to be buffered at the receiver, and you can't actually clear it out, because you don't get acknowledgments for it.

Okay, so talking about receive structures then; this will cover similar territory. This is what it might look like in TCP: you've got a segment reassembly list for segments that have come in out of order. Let's say, in this case, data segment two is missing and one has been received; it's in order, because that's what we were expecting, so we can append that one to the socket buffer, and it can get sent to the application. So how does that change now? At the moment, we don't use a receive buffer on any of the subflows; that may or may not change in the future. But, for the moment, let's say we're receiving segments, and again we've got one, we're missing two, and we've got four. There's a temporary structure here called a segment receive list: at the point where you would generally append to your receive buffer, it puts the segment in this separate list here. This little 's' here is basically saying: I've got some MPTCP signalling; it's not relevant for the subflow to process that. Let's say it's a DATA_ACK or something like that: what you want to be doing is passing that up to the multipath layer, to process it and respond to it. So, in that case, we've got a segment with some signalling on it, and we're going to enqueue that one as well at the same time. At the multipath layer, we are using the receive buffer here, but we also need to do segment reassembly, and this relates to the head-of-line blocking stuff. We've got our second data sequence number, DSN 2, which has kind of arrived early; we don't have one yet, so we need to buffer that, and we can't acknowledge anything at that point either.
But now that we've received this, it's going to be transferred from the segment receive list into the multipath control block, which essentially has a list of segments coming in from each of your subflows, and it's going to process them a little bit like tcp_do_segment() does. So we've got a whole bunch of subflows, which will be appending segments onto this list here, and the multipath thread will eventually run at some point. At that point, it will process your data-level segments and any signalling that has arrived; so a segment that doesn't have any data on it, but has got something that we need to process, will be processed at that time, and reassembly is done if necessary. In this case, we've got our next expected segment, so we can append those to the receive buffer, and they can be delivered to our application. What's happened here, though, is that we still haven't received segment two on this subflow. I'm not showing it here, but what can happen is: if that takes too long, that subflow may go into retransmit, say, or we may get new segments again; or, at the multipath layer, we may decide that this subflow is performing too slowly, and try to transfer DSN 4 and 5 on a different subflow.

Okay, so I kind of raced through that. A sample topology: with the patch release and documentation, there's kind of a baby's-first-multipath topology that's described in there, and I'll go through what you might expect if you grab the patch, eventually. It's a simple experiment that you can do with some VMs, just to see how things work and find out for yourself. In this case, I've got two hosts and two routers in the middle; the routers are running dummynet, to rate-limit some of these connections. Subnet one and subnet two are rate-limited to 8 megabits per second. Basically, host one is going to connect to host two and transfer 50 meg of data. There's no packet loss in this network, and the queues are quite deep, so we do get a lot of RTT. And depending on what I've configured in the path manager, we may get one subflow, or we may get two or three; I'll go through that.

So the first example is just a single subflow. What does a single subflow look like? We can connect it up, we've got our rate limit, and nothing too different from regular TCP happens here: when we look at the throughput, it looks roughly close to 8 megabits per second, and nothing much exciting happens. So what if we make it a little bit more interesting? Now let's set it up so that host one connects to both of the interfaces on host two. We've got our initial connection, which is going to be this blue one here, and then, once that is established, this red line is going to be joined into the connection. The interesting thing here is that they're both traversing the same bottleneck. And, again, I'm using round-robin scheduling; that's the basic implementation that I've got at the moment. So, essentially, every time the process writes some new data into the send buffer, it just stripes between the different subflows available. And we get roughly 4 megabits per second each; well, that's what we were kind of expecting, they're both sharing that same link. They're both using uncoupled congestion control as well, I should mention, so they have their own congestion windows, and the multipath layer doesn't really interfere with that.
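For reference, the dummynet rate-limiting on the routers comes down to a couple of ipfw rules per box; something like the following, where the rule number, queue depth, and interface name are made up for the example:

    ipfw pipe 1 config bw 8Mbit/s queue 100
    ipfw add 100 pipe 1 ip from any to any via em0

With a deep queue and no configured loss, packets sit in the pipe rather than being dropped, which is where the large RTTs in these tests come from.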
So what if we then try sort of an additive connection? We've got these two separate 8-megabit-per-second links. We'll again establish this blue connection first, and then add in this yellow connection afterwards as an additional subflow. And we don't quite get what we would expect; sorry, that's the wrong graph, even. That's a per-subflow throughput, and we're getting about 6 megabits per second, where we'd probably think we'd get 8. I'll talk a little bit about why that might not be the case here. But what we can see is that, okay, we're getting a little bit more than one of those links on their own: we're getting 12 megabits per second.

So here's where things get kind of interesting; and this diagram is a little bit colourful, there are a lot of lines. Basically, the path manager, by default, if you say an address is available to a connection, is going to try to join them all up together. In this case, we've got two addresses on each of those hosts, and they're all going to try to connect to each other, so we wind up with four subflows: we've got the ones across the top here, we've got the red one down here as before, and then we've got this extra green one, which connects the second address on H1 to the second address on H2. So what does that look like? Well, it looks quite slow, actually: all of the subflows wind up doing about 2 megabits per second. We'd really expect to get a cumulative throughput of, say, around 16, or whatever our two rate-limited paths add up to. So why might this be the case? Well, if we just look at this top subflow here, and compare what's happening with it across the experiments: only a single subflow, then two subflows, then four subflows. What's happening with this one subflow in terms of the send buffer? Well, it's spending a lot of time not sending much data. And here's why: we've got such a long RTT, and we've got a 32K send buffer by default, which is now being divided up each time across four subflows; the long RTT is kind of absorbing everything, and we're not filling the bandwidth that we should be filling. So, basically, these subflows are spending a lot of time with nothing in their send buffer, not sending anything. And this is an interesting point, in terms of the kinds of things you see, or have to start thinking about, with multipath: okay, how do I handle all the aggregate traffic here? In this case, the 32K send buffer is clearly not enough, because we're not going to service our subflows with enough data.
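One obvious knob for the starvation problem just described is simply a bigger send buffer. A hypothetical sketch follows; the real research question is how to size (or autotune) the multipath send buffer against the sum of the paths' bandwidth-delay products, not this particular call:

    #include <sys/types.h>
    #include <sys/socket.h>

    static int
    grow_sndbuf(int s)
    {
            int sz = 256 * 1024;    /* well above the 32K default */

            /* The kernel may clamp this, e.g. via kern.ipc.maxsockbuf. */
            return setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz));
    }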
So, a little bit of status. Basically, I'm just doing the documentation, and I'm doing some more testing for the next patch release. The previous patches have been a little bit buggy and not quite so easy to use straight up; the intention this time is to make sure it works quite well. It covers these kinds of simple scenarios, with round-robinning, with adding new addresses in, and, basically, not at the moment using coupled congestion control; but it should essentially work, and you should be able to experiment with it and rely on it a little to keep working.

I should acknowledge a couple of institutions at this point. The FreeBSD Foundation, which has done something a little bit different in terms of funding my master's; I don't think it's something that's commonly done, so they've allowed me to continue with this work this year. Again, Cisco, who has provided funding on a couple of occasions for the multipath stuff. And, of course, BSDCan, for allowing me to come in and talk a little bit about MPTCP again, and hopefully pique some interest, and maybe get some people interested in taking a look at the patch in the future, maybe providing comments, criticism, or help in any way, which would be good. And, of course, there are some links here: that's my contact, and that's the web page where I host basically the patches and things so far. The idea is that after the next release there will be some kind of public repository where people can grab the source code; I'll update that page as well, because it's a little bit out of date, but basically anything to do with the project is on there, all the documentation, all the patches, and all that kind of stuff.

Are there any questions?

[Chair:] I think we have questions. Enough time for that? Yep.

[Audience:] Hi. You said you were using TCP options to make it work. How do you deal with the fact that many routers will drop packets with TCP options?

Yeah. So I didn't talk too much about the protocol, but this is one of the things that was considered, and there are fallback mechanisms built in. Let's say we do try to open up a new MPTCP connection, and that MP_CAPABLE option is stripped off because it's not recognized: from that point on, it will just continue as a standard TCP connection. Or let's say we establish a connection, so the MP_CAPABLE works, but then, later on, one of the data-mapping segments is dropped: that can be detected, and then it will fall back to regular TCP. So there's a lot of that kind of stuff, and a lot of people have done testing; I haven't done much of that myself, but it's been documented over the last couple of years, people looking at that sort of thing and basically coming up with contingencies for it.

[Audience:] Hi. Does your Multipath TCP also support all of the currently supported TCP options, like selective acknowledgments?

Yeah, so SACK works. I haven't tested it extensively with everything, but yes, they should work; I just haven't tested them all. Basically, the TCPs underneath can work as TCP worked previously; the multipath stuff is just the sequence numbering, to take care of reassembling stuff at either end, mostly. So stuff like SACK still works, and all the retransmit stuff is done in a standard TCP kind of way.

[Audience:] Does congestion control handle both directions on very asymmetric links? I end up in a lot of situations where I've got high-bandwidth, high-jitter links, and low-latency, low-bandwidth links. So you've got two. I've been doing a lot of this lately with kind of spraying and such.

Okay, so in terms of how congestion control works: both senders can use congestion control, is that what you're asking?

[Audience:] Well, I guess you have more than two directions going on, but does it handle asymmetric paths? You've got a lot of bandwidth this way on one, and a lot of bandwidth this way on the other.

Right, in terms of moving data to an appropriate path. Yeah: depending on the congestion control algorithm that you use (they can all serve slightly different purposes), you can use loss detection, or you can use RTT, and try to use that to grow the congestion window more on the subflows which have more bandwidth available, so you're going to send less data on your lower-capacity link. You can do that with congestion control, or you can do that with path management as well, and scheduling as well.
So, let's say you have a little bit of information about a particular path that you may want to prefer: you can use the scheduler to map all data to that one.

[Audience:] Right. And then, situationally, you have no info; and when you have no info, you have to rely on the algorithm to work.

Yeah.

[Audience:] So, a bit more on asymmetry. Let's say a subflow can send, but the acknowledgments cannot come back through the same path. Can they still use another subflow? I mean, can you use another IP, or another subflow, to send those acknowledgments back?

So you want to use a path purely for acknowledgments?

[Audience:] Essentially.

Well, if you're talking about at the data level, then yes. If you're sending on one path, it still needs to receive its TCP-level acknowledgments some way; so if that comes back by another path, but it ends up at the right interface, then that will work. You can send your data-segment acknowledgments on a completely different path; if you choose, you can always nominate a particular path to send those on, if that kind of answers it. But if you're talking about at the TCP level...

[Audience:] One sends in one direction, and let's say they're bidirectional at the beginning. Eventually, one direction for one of the subflows fails for you. Then, at that point, the acknowledgments for that subflow; because, if I understand correctly, you have all these mini control blocks for the individual subflows, so you have mini TCP connections going on with a similar problem. Can the acknowledgments coming back for that subflow one come back on subflow two, so that you can still keep subflow one sending?

Okay. So, basically, in that case, if you're not getting your acknowledgments back at the TCP level, your subflow is going to retransmit and time out or whatever. If you're talking about at the data level: if you can't send anything back on that path anymore, then it'll use the other one. Yeah, it'll use the other subflow. And what would happen is: if you're sending data and you're not getting acknowledgments back, internally you can say, well, this subflow's gone into retransmit; let's take all the stuff that was outstanding on that particular subflow and just send it on another subflow.

[Audience:] But at that point, you'd be using only the bandwidth available for whatever is remaining?

Yeah.

[Audience:] So, when you're adding a subflow: TCP has a server that's listening and a client that connects, right? Does the client have to initiate all the additional subflows?

It can be either, generally. How I've done it is basically to assume: okay, let's get the client to connect in first. I guess the issue is, if you consider a lot of clients being behind NAT, you can't really have the server connecting back into a lot of clients. So, in that case, the client, if it's multihomed, connects in to your server, and then, if it's got another address, it just joins that in as well. Yeah. The server can advertise that it has other addresses available, too.

No more questions? No? I'm ready.
Come with me on a journey to learn about the Multipath TCP (MPTCP) protocol and the first publicly released FreeBSD implementation. This talk will examine MPTCP's 'wire' characteristics, the architecture of the modified FreeBSD TCP stack, observations from the development process, and results of both performance analysis and empirical research conducted using the stack. Multipath TCP (MPTCP) transparently retrofits multi-pathing capabilities to regular TCP and is a work-in-progress Internet Draft being developed within the IETF. The Cisco University Research Program funded the Centre for Advanced Internet Architectures to develop an interoperable implementation of MPTCP for FreeBSD as part of a research project to study mixing loss-based and delay-based congestion control in a multipath context. As a researcher on the funded project and lead author of the FreeBSD MPTCP implementation, I have data and insights to share with you about the process of going from stock FreeBSD and an IETF draft to an interoperable MPTCP implementation that is being used in ongoing research programmes.
10.5446/18655 (DOI)
Thanks for attending. So, I'm Baptiste Daroussin. I work for Gandi.net as a system engineer, and I'm here to talk about packaging base. So, what is this about? Since I started the new package manager, I've received a lot of mail from people asking me: can we package the base system, so that we can upgrade things, so that we can decide what we will install and what we will not install, et cetera. So I figured it was a hot topic, which happens quite often in the history of the project, and I went to the mailing lists and asked a lot of people what they expect. And, well, basically it went a bit like this; I tried, anyway, to figure out what people were expecting from that.

So, there were people there saying: please, do not ever touch how we provide FreeBSD right now. Well, those people forget that before FreeBSD 9, it was split into multiple sets that you could choose to install or not; so it was already split, somehow. Then we have people that ask: can you allow us to have a very, very minimal installation? In particular, in the embedded area, you don't want to install all the fancy stuff we have in the base system; you just want the minimal binary set you need to run the system. We also have people that say: I don't want any toolchain installed on my server, so why can't we just install FreeBSD without the toolchain and all related tools? I got people that say: I don't like sendmail, I don't want sendmail being installed; I don't like having bhyve installed, or whatever other component which is part of the base system; I don't want them to be installed, but I want the rest of the system. We have people that say: I don't want any development files; why do I have those .a files that take a lot of space on my hard drive, or those .h files, whatever. And we have people that say: I don't care about having any documentation at all on my system, because I have Google, I have whatever, I have another box where the documentation is installed, so here I don't want it. And there are people that say: okay, that's cool, I installed the FreeBSD release, and I actually want to be able to debug things, but we don't provide any debug files; so how can I install them, now that I want to debug this, or this software I'm developing using the various FreeBSD libraries? So that's kind of tricky, but I think we can find something which will satisfy most of the people.

So, for the people that ask us not to split FreeBSD at all, what we could do is provide meta packages, which means a very high-level package that pulls in everything. You would have the FreeBSD package by itself, which will install the kernel and the base; so if you want to install something, just 'pkg install FreeBSD', and you'll have the stock FreeBSD as you expect it to be, and you'll be done. If you just want the base system, because, for example, you're installing just a jail and you don't need the kernel, then you have a package called FreeBSD-base. You have the kernel, which could be separated, the docs, and we could do other sets like this, like minimal, et cetera. So, for people asking not to split, we can provide the same behaviour on installation as what we have right now. For people that want a minimal installation, we can provide a meta package that makes sure you only have what you really need to run your system; and if you need something else, then you have the other packages to add.
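As a concrete illustration of the meta-package idea, installation might look something like the following; the package names here are just the ones sketched in the talk, not a finalized naming scheme:

    # pkg install FreeBSD            (kernel plus base: the stock system)
    # pkg install FreeBSD-base       (userland only, e.g. inside a jail)
    # pkg install FreeBSD-kernel
    # pkg install FreeBSD-docs
    # pkg install FreeBSD-minimal    (just enough to run the system)

Because each meta package only declares dependencies, installing the top-level one reproduces today's monolithic behaviour, while skipping it gives the fine-grained choices described next.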
By splitting the base system into smaller packages, we can also decide that we have a package which is dedicated to bringing the toolchain: the debugger, the linker, the compiler, whatever. And there are a lot of components we have in contrib that people may or may not want to install on the system. We have a lot of macros in the build system that allow you to cherry pick exactly what you want in your final installation, so we want to provide the same mechanism for people using binary installation or binary upgrade. For example, Sendmail could be moved into a separate package. It will still be the integration of Sendmail we have in the base system, not the stock Sendmail from upstream, but you install it when you want. Same goes for OpenSSL, bhyve, and a large bunch of tools we have in the system. We also want to separate the runtime from the development files. I know this is one of the points where I got the strongest pushback in this discussion, because all the developers say: well, I have this library, I want to hack on it, I need all the headers, I need all of this, so why would I have to know that I need the development package that goes with it? But if you think properly about it, we have the development meta package. So if you end up doing development, you just pkg install development. You have all the headers, all the .a files, and you'll have all the toolchain that goes with it, because it will be a dependency. So basically, if you want to do development, you will have all you expect from the system, but people in the embedded area will only have the runtime, which is what they expect. The documentation: we already have a set which is called doc. Basically, it's everything which is in /usr/share/doc, mostly papers and documentation on various areas of FreeBSD. Having a FreeBSD-docs package that covers only the man pages makes no sense, so the man pages will go along with the runtime or the development package, depending on the kind of man page it is, and the FreeBSD-docs package will cover exactly the same as the doc set we have today. And we'll provide something which we have never been able to provide until now: for every single package that has a binary, we will provide the debug symbols as a separate package. So if someone has a problem, has a segfault in a binary, they can just install those debug files, start the debugger, and get a backtrace, or try to figure out where the problem is, without having to rebuild the system with debugging enabled. So that's basically how we want to cover all the requests from users that I could find in all the various places. So, why do we want to package the base system? We want to package the base system because we want to do binary upgrades. Right now, we have only one tool to do binary upgrades. It works: it's freebsd-update, and it allows you only to upgrade your system to get security patches on a release, or to go from one release to another release. But if you're someone that wants to track current, or track a stable branch, then you have to build everything yourself, and this is not really user friendly. So if we want to get more people involved, more people tracking head, more people tracking stable, all those users need to be able to upgrade their system and keep in sync with the branch they want to test.
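As a sketch of the development and debug splits just described, with hypothetical package names following the naming pattern from the talk:

    # Headers, .a files and the toolchain come in as dependencies.
    pkg install FreeBSD-development

    # A binary dumps core: pull in its debug symbols, then get a backtrace.
    pkg install FreeBSD-runtime-dbg
    lldb -c sh.core /bin/sh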
And this allows us as well to get more feedback, because we will discover the issues we have with stable before we get into the release phase. We also want to allow people, as I said before, to do fine-grained installation. And we want to allow developers to be able to provide a new set of packages. So imagine, let's say, the bhyve people want to allow users to test a new version of bhyve. They don't want to commit it yet, because they're not sure about some part of it. What they could do is create their own package of bhyve with the build system we have, put it somewhere on the internet, and say to the users: can you add this repository to your pkg configuration and do a pkg upgrade? And you'll get the latest shiny bhyve, and you will be able to test the new features before they enter the base system. That can also allow us to test a lot of things, like a new libc++ or a new Clang import, and things like that; so actually, we could have more testing done before the code hits the source tree. I'll sketch what such a repository configuration could look like below. This also allows us to do fine-grained merging of configuration files, which basically means getting rid of mergemaster and etcupdate. Right now, everything in the /etc directory tries to get merged, even things where there is no reason to merge them, like all the content of rc.d, or the content of the defaults directory, and things like this. With packaging base, we could cherry pick which ones need to be merged, because the user is supposed to be modifying them, and which ones will always be overwritten. I'll explain a bit later how it works and how it simplifies a lot of things. And we have a huge problem right now when we do installworld: everything related to the loader is half updated, but not entirely updated. So if you're running a current system, have a look at your loader.rc and files like that, and you'll discover that they are a bit different from the ones that ship with the loader, depending on how you do your upgrades. The fact that we package those means we make sure that we upgrade the loaders and their configuration each time we do an update. So the goal we have is to make it very simple for users to generate their own packages. Basically, we want a very high-level target at the root of the source tree where the user just runs make packages. It will do everything: build world, install it into a staging area, create the packages, prepare everything, and then put it in a place where you just have to push it to your HTTP server, or however you want to provide those packages, and they will be ready. We want to allow you to build as a regular user: you don't need to be root to be able to build those packages. Root will still be able to use those packages, but a regular user can just check out the tree, run make packages, get those packages ready, and install them wherever they want. We want reproducible builds and reproducible packages, so that if you're in the same tree and you build the same sources twice, you get the same packages twice, with the same checksums and so on. That will give us a very simple mechanism to do security updates and errata and things like this, without having to discover that, oh, magically, this file has been modified because one of the headers has been patched and we didn't know.
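For example, the hypothetical bhyve test repository could be wired in with a pkg repository configuration file; the repository name, URL and path here are invented for illustration:

    # /usr/local/etc/pkg/repos/bhyve-test.conf
    bhyve-test: {
        url: "https://people.freebsd.org/~dev/bhyve-test/${ABI}",
        enabled: yes
    }

    # then pick up and install the test packages:
    pkg update && pkg upgrade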
So in that case, because it's reproducible, we can go through all the files and say: OK, this one is different from that one, so this package has been modified by my change, so I will just bump the version of the package, and the user will just install the new stuff. So that was the automatic bump of the right packages when patching a release. We want to automatically handle all the configuration files, so basically the package knows what goes where and what needs to be merged, and we want it to be cross-installable. Say I'm on an amd64 system and I want to build packages for my ARM box. What I do is run make packages with the magic flags to do that for ARM. I create those packages, and then I can use pkg -r with a directory, install everything in there, and I have a directory ready for ARM. Actually, that works. And one of the things I haven't done yet is that we also want to be able to upgrade an ARM box in a cross-installation way. Basically, what I do right now, and it works: I have a PandaBoard at home and an SD card where I have set up a FreeBSD with the packages. If I want to upgrade, I build new packages on my amd64 box, remove the SD card from my PandaBoard, put it into my laptop, mount it, run pkg upgrade -r with that directory, and then unmount. I can reboot, and it's ready to be used on my PandaBoard. So, how do we want to do versioning? If we are splitting into packages, then we need to make a proper version. Basically, the version, even for components like OpenSSL, like Sendmail, et cetera, will represent the FreeBSD version and not the upstream version. Why? Because we may have patched them, and we want to make sure that the user knows that this is the version shipped with this version of FreeBSD. So what we will do is this. When tracking current (the goal is really to do this for 1.0; that's why current is 12 here), we'll have the major version, an 's' for snapshot, and the date when it was built. So each time you have an upgrade, you have all the packages to install. When tracking a stable branch, we'll have the major version and the minor version of the next release, plus the snapshot date, saying that we are in a snapshot between this release and the next one. So a user tracking stable knows exactly that the stable version he has comes after this release and before that release. And for the releases: during the alpha phase, we'll have 11.0, 'a' for alpha, and a number depending on how many iterations release engineering is willing to do in the alpha phase. Same goes for beta. We have just a special case for RC: instead of using 'r', which is the usual thing for release candidates in package versions, we'll use a 'p', because people could be confused and think that an 11.0r-something is actually a release. Then the release itself will be the number as you expect, and we'll bump the last digit for security fixes, each time we have one. By the way, that matches how we mark security issues in uname right now, so no change is needed at this point. Yes? [Audience question, partly inaudible, about which letters to use for the pre-release and patch suffixes and whether they could be confusing.]
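Putting the scheme together, hypothetical version strings would look like this; the letters and their ordering come from the talk, the exact separators are my guess:

    FreeBSD-runtime-12.s20150612    # tracking current: major version, snapshot date
    FreeBSD-runtime-11.1.s20150612  # stable snapshot between 11.0 and 11.1
    FreeBSD-runtime-11.0.a1         # first alpha of 11.0
    FreeBSD-runtime-11.0.b2         # second beta
    FreeBSD-runtime-11.0.p1         # release candidate ('p' rather than 'r')
    FreeBSD-runtime-11.0            # the release itself
    FreeBSD-runtime-11.0.1          # 11.0 plus its first security fix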
[The discussion continues, partly inaudible, around avoiding confusion between the release-candidate and patch-level letters.] So, we are at the beginning of this project, and this is exactly when we want that kind of feedback. So yes, if someone can take some notes... well, we have the video for the notes. OK, the pkg side. There were important things. When packaging the base system, I discovered that pkg was not good enough to do everything. One of the things is that in the base system we have a couple of immutable flags set on some binaries. We do not have that in the ports tree at all. So basically, pkg had to handle them: before upgrading a file, it checks whether there is an immutable flag, removes it, installs the new file, and then sets the immutable flag again if it is in the new package. That's something we added in pkg 1.5. And we needed the ability to handle the configuration files and merge them. This is tricky, because I really, really, really didn't want, and still don't want, to write my own three-way merge code and stuff like this. Happily, there is a BSD-licensed VCS available out there named Fossil, and the code doing that part was pretty isolated, so I was able to extract the code from the Fossil VCS and bring it into pkg. So now we have a new keyword in the plist; the plist is the list of files we have internally for each package. This @config keyword means: OK, if something has this keyword, then keep a safe copy, a baseline, of what the file was, and install the file on the system. If the user changes it, then during an upgrade we have the baseline, the new file, and the user's modified file, so we are able to do the merge; there is a small plist sketch below. Yes? As various config files get converted to UCL, will you need a different keyword or something? Because hopefully UCL will cover its own merging method, and you might want different kinds of merges. So, when we change a lot of things to use UCL, what we will do in that case is keep the default file somewhere else, and we won't have to touch the one modified by the user, so I won't need to merge anything. This is more to handle the things that are not sane in terms of upgrade, where the configuration file and the default we provide are the same file, so we need to merge, and there is no equivalent of an rc.conf.d, for example, where we can put things. In the long term, we should have fewer and fewer @config files and a saner way to handle configuration files. Now, some people might not trust the three-way merge code to do the proper merging for them. So we also added an option in pkg so that, if you don't like this behavior, you can just disable it. What pkg will do then is create a file alongside the modified file, named .new, so that you can go through your configuration files and say: OK, there is a .new; I will merge myself, because I'm more confident in myself than in the merge code. And we needed support for cross installation. Until now, cross installation with pkg was done via chroot: we had pkg -c with a directory, and pkg was chrooting inside that directory. While it works, it's not perfect.
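Here is a sketch of a plist fragment using that keyword. Only @config itself is from the talk; the file names are illustrative. The first file gets a baseline kept and is three-way merged on upgrade, while the other two are simply overwritten:

    @config /etc/mail/mailer.conf
    /etc/rc.d/cron
    /bin/sh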
It's not perfect because it does not allow a regular user to do cross installation, and we want regular users to do cross installation. It's not perfect because it was chrooting very early in the process: your packages had to also be inside the chroot, and the path to the packages you chose to install had to be relative to the chroot. So that was not great. So we added something, which is -r. Please don't use it with ports yet, because the ports have no idea how to do that properly, but the base system is able to do it properly. -r just says: this is what my root directory will be; install everything in there and be done. Yes? I think you said immutable flags; with -r as a regular user, in that case you can't set the immutable flags? That's part of the issue I'm coming to later. So, the last point; well, I'll reply to that now. One of the plans, which is not done yet, is that pkg will also be able to emit an mtree, so that when you have immutable flags and whatever, when you're cross installing as a plain user, you'll have an mtree. Then, when you run makefs or whatever, you can feed it the mtree, and the generated image has the proper permissions, the proper ownership on the files, the flags, whatever. That's the plan. It is not implemented, and it's something I plan to implement for 1.6. Another thing is that we have to execute a couple of commands at the end of the installation. For example, we install login.conf, and we have to run cap_mkdb on it to generate the .db file. So we added those scripts into pkg. But if you are installing with -r, then you need the script to be aware of which directory everything goes to; they will execute the command from the host but generate the .db inside the target tree. So, how did we hook into the build system? The goal was to be the least invasive possible. It's kind of tricky, given how the build system is right now, but I think we managed to do something not that bad. First, we needed to be able to figure out which file goes into which package. For that, we discovered that we had a flag named NO_ROOT, which will probably be renamed, because nowadays it's confusing to users that NO_ROOT, if you run it as root, still does something. Basically, what it does is force the install command to generate an mtree of what is being installed during installworld. With that mtree, we get the list of files that should be packaged at that moment, and we get the mode they should be installed with, the flags they will be using, and all that stuff. And mtree, since we updated to, I think, the NetBSD mtree version (Brooks might know this part better than I do), has the nice feature of supporting tags. So we abuse those tags. What we do in the end is that each file, when installed, is tagged with a couple of tags to explicitly say: OK, this is a configuration file; this should go into a package with that name; this is a development file; whatever. This allows us to generate the plist automatically: I have a small parser which converts the mtree into something pkg is able to understand, the plist. And with that, we don't have to manually maintain the list of files that goes into each package. Next step: for packages, you need metadata, which we don't yet have in the sets. Metadata means we need a name for the package, and we need a comment that explains, in a short version, what the package is about.
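A minimal sketch of that unprivileged cross-install flow with -r; the directory, device and package names are invented:

    # Populate an empty directory; no root privileges, no chroot needed.
    pkg -r ~/arm-root install FreeBSD-runtime

    # Later, upgrading the ARM board's SD card from the laptop:
    mount /dev/da0s2a /mnt
    pkg -r /mnt upgrade
    umount /mnt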
We need the long description of the package. We can put the license in it, which also means that you can cherry pick the installation based on the license you want: if you don't want CDDL stuff, you just have the information that this package is CDDL or not. And for that, we use the UCL format. So we added a new directory, release/packages, where we define those UCL files. It's also in those UCL files that we describe the pre-install and post-install scripts. And we added a couple of targets to simplify our life. The new target is make stageworld. stageworld is a bit like a mix of installworld and distributeworld: installworld does not touch the configuration files at all, while distributeworld does. So what we decided is to install everything into a staging area, including the /etc files. Then, from this staging area, we create all the packages, based on the mtree METALOG we obtained. And we have (I forgot one of the targets here) a target which is create-world-packages: it only creates the packages from the world, assuming you have already run stageworld. You have the same for the kernel: stagekernel and create-kernel-packages. And the high-level packages target will do everything, from buildworld and buildkernel through stageworld, stagekernel and the create-*-packages targets, and be done. So, how do we populate the tags all over the place? One of the things I didn't want to do is go through every single Makefile we have in the build system and add the tags manually, because it's painful. Happily, most of the binaries we build go into the runtime package. So what I did is go into all the bsd.<something>.mk files and figure out, at the point where something is installed, what kind of thing is being installed. If I'm in bsd.prog.mk, I can say: OK, my default tag is the runtime package. But in bsd.lib.mk, the .so file goes to runtime, while the .so symlink and the .a file go to development. I've been through those files, adding the tags automatically. So we have basically three kinds of tags. We have a tag which begins with package=; that is basically the name of the package, and with something like this, it means that in the end it will be FreeBSD-runtime. We have development: this says that we make the same package name, except we append development to it, and all those files go into that package. And we have config; config is not related to the package name, it's related to what I said before: this file is going to be merged. Now, say you want to cherry pick your files. You want to say: for this particular case, I don't want it in the default FreeBSD-runtime; I want it in a package of its own. Let's take the case of bhyve: I want a package dedicated to bhyve. All I have to do is use a new macro called PACKAGE. I add this macro, and automatically I'll have a tag here, package=<that value>. The other two kinds of tags continue to be applied automatically, depending on the kind of file it is. So you just go into the directory you want to separate from the default runtime, add this macro, and you automatically have a new package; a sketch follows below. And because you will always forget to add the metadata when you create a new package, the build fails on purpose, saying that it cannot find the metadata if you haven't written it. So you just go into release/packages and create the metadata file that matches the new package you've created.
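Carving bhyve out could then be roughly a one-line change in its Makefile. The PACKAGE macro is from the talk; the rest of this minimal Makefile is illustrative, and the real one is larger:

    # usr.sbin/bhyve/Makefile (abridged sketch)
    PACKAGE=    bhyve        # installed files get tagged package=bhyve
    PROG=       bhyve
    MAN=        bhyve.8

    .include <bsd.prog.mk>

The matching UCL file under release/packages then supplies the package name, comment and description, and the build fails if it is missing.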
So that's basically how we hooked into the build system. Now, we have a couple of issues. Yes? Are you able to do that at the subdirectory level, for example; could you put it in the Makefile.inc at the top level and cover particular subdirectories? Yes. SVN is done like that, for example: for svnlite, you go into the Makefile.inc at the top level, and everything below goes directly, automatically, into the SVN package. So now, here are the issues we had when dealing with NO_ROOT, and by the way, this is dealing with NO_ROOT as a regular user; that's the issue we have. I don't know why yet; I haven't figured it out. But when we install the mtree into the staging area through the regular installworld, we get those error messages, which are somehow expected because of the permissions we apply, but not very nice to see when you're building. So that needs fixing. Sorry? [Audience: I have committed a hack; I want to find out.] Wonderful. The other thing is: why is it outputting this to stdout and not stderr? Because I was trying to capture all the errors, so I redirected stderr to a file, and I couldn't get those. The other thing: we don't have many immutable flags anymore, but we still have some, and sometimes this happens, which is logical, because we first install a binary with a read-only mode and then try to apply chflags to it, so it fails. As root, it's magically hidden, but we need to figure it out. And another thing: we have hard links made to some files where we applied chflags; I mean, the file is immutable, and then we try to make a hard link to it, which will fail, obviously. I think I fixed all of them, but I need to check. Some things do not use the install command, and if you don't use the install command, your file does not end up inside the mtree, so we are not able to figure out that those files need to be in a package. We also had a couple of issues with the build which, in my opinion, are blockers for the way we do installworld; well, stageworld, which is a kind of installworld. Please, someone, have a look at those two and figure out why (well, I know why, but if someone can fix it, it would be better) they keep installing the same file all over the place. Given that you have two programs to build and you install a shared file, because there are two programs, the file will be installed twice; with three programs, the file is installed three times. And then, when I try to build the packages, I get a lot of warnings like this, which is very ugly in my output; I don't like to see that. So if someone wants to fix it, that would be great. Yes? [Audience comment, partly inaudible: the meta-mode staging work should take care of that.] Thank you. So, the last thing is the configuration files. That's the only part where I cannot cherry pick which configuration file goes where, because we populate the whole /etc directory in one place. We are doing very strange magic over there, black magic I would say, and it's really not made to make it easy to create packages and decide this goes here, that goes there. So I think we really should take a new approach to how we do configuration files and think about it. Yes? So this relates to the point you had earlier: you could have some kind of manifest that you maintain, and do it the other way around.
Yeah? I don't want to maintain them. That might be the way to go; I wanted something as automated as possible, but yes, sometimes it's not doable. You're right, the basic problem is that the stuff dealing with /etc is actually technically an outgrowth of the release machinery, and that building mechanism is not the world mechanism; it had a different goal. Yeah, actually. Why are you having trouble with it? Well, actually, I think that most of the entries in that directory should not be there. They are in that place just because of mergemaster, etcupdate, or some way to merge things when you do installworld. If I have, for example, a configuration file for the dma binary, we install the man page from the dma directory where we build dma, so why not install the configuration file from that place too? It would make more sense for each piece of code to go with its configuration file, except that in that case the old installworld way of doing things will not work. So maybe there is something to study in that area, and there are a couple of issues; but if you can think about it and provide some ideas, I would be really grateful. OK, so from the end user point of view, what will happen? If you want to upgrade your system to the latest security updates, or to the next step of the branch you're following, you just do pkg upgrade, wait a little bit, and if you see a kernel update, you probably want to reboot. But basically, that's all you have to do. Well, there are a couple of things here we still have to discuss: the upgrade of the kernel can be a bit tricky, and people expect different things, so nothing has been decided yet on exactly how we will handle that. There will probably be some kind of brainstorm on the mailing lists to find something that fits most people. But for regular users, it will be just: pkg upgrade, reboot, and you're on the updated system. If you want to create a FreeBSD disk image, because you want to make a USB stick or an image to go into a VM, then you just have your packages already available, all those FreeBSD packages. You create your directory, cross install your FreeBSD in there, run makefs, run mkimg if you want a virtual machine image, whatever, and you have your system installed in a couple of commands. And if you want to create an ARM disk: this is exactly what I'm doing to bootstrap my PandaBoard the first time. I build my packages for armv6. Because my target directory will be empty, pkg is not able to figure out which ABI the packages have to be for, so I specify it manually. This is only needed the first time; after that, you don't need it anymore, because pkg automatically figures out you're doing something for armv6. And I install FreeBSD-minimal: I have FreeBSD-minimal, I have my kernel, I have all the stuff in it. Then, in that case, I create the image, dd the image onto the card, and then I can boot from it. It works pretty much out of the box; a rough sketch of the whole sequence is below. And if I want to upgrade my image, that's what I was saying earlier: I just remove the SD card, put it in my laptop, upgrade, and put it back into the machine. Yes? So is the intent that I'll type pkg upgrade and that will deal with the base system and the packages from ports? No, because the ports do not yet know how to do cross installation properly.
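Put together, the PandaBoard bootstrap might look roughly like this. The ABI string, package names and devices are illustrative, and the board-specific boot partition is omitted for brevity:

    # First time only: the target dir is empty, so pkg cannot guess the ABI.
    pkg -o ABI=FreeBSD:11:armv6 -r ~/panda-root install FreeBSD-minimal FreeBSD-kernel

    # Turn the populated directory into a bootable image and copy it out.
    makefs -t ffs panda.ufs ~/panda-root
    mkimg -s mbr -p freebsd:=panda.ufs -o panda.img
    dd if=panda.img of=/dev/da0 bs=1m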
And there will be a lot of work to make the ports do cross installation properly. But like the first one... Oh, if you do the first one, you'll have everything at once. OK, so will the policy be to automatically update from 11.1 to 11.2 without asking, or will it ask? That is not defined yet; that's more a policy question for release engineering. It will depend on how we provide the repositories. If the default repository we point at is named 11.1, then you'll stay on 11.1, and you'll have a manual switch to go to 11.2. But if you are just pointing at the plain 11 one... It'll depend on which repository your pkg configuration points to. Exactly. If you point it at the 11.1 repository, you'll get the security updates for 11.1, and that's it. If you switch it to a different repository, then you'll get the upgrade. If you're tracking 11-stable, it'll just continue to move forward. I assume; I'm thinking the same thing. Yeah. But you'll have the choice, because it will be in the pkg configuration. And if you want to upgrade only the base system, and not the ports packages at the same time, you can specify pkg upgrade -r FreeBSD-base, that being the name of the repository. By default it takes both repositories; those would be two different repositories. But if you just want to upgrade the base system, just -r FreeBSD-base; a sketch of such a repository configuration is below. Yes? So, for instance, I think it would be quite handy, whether or not it's the default, to have the ability to have a repository that represents the lifetime of 11, so that by default it is easy, if I want to, to roll from 11.1 to 11.2, especially now that we've changed the support model. You'll probably have... I don't know yet; this needs to be discussed with release engineering, the cluster admins, and everyone on those teams. But I think the way we'll go is one directory per release, and probably one symlink, something like that, to the latest release of a given branch, so that when it goes from 11.0 to 11.1 you switch automatically, because your configuration is set to always follow the latest release. That's, in my opinion, how we will handle that. Having an additional command would make it very, very specific to how we do things in FreeBSD, meaning that if one day we want to change, we first need to modify the code. pkg was mainly created for FreeBSD, but it's supposed to work on everything, so I don't much like the idea of doing that. In my opinion, it's more a configuration thing. It's basically the same thing we do with ports now: we ship, in the default pkg configuration, the latest directory, and you have the choice to switch from latest to quarterly, for example; sane defaults are hard to do. What I forgot to mention as well is that, from a developer point of view, this automatically gets rid of ObsoleteFiles and OptionalObsoleteFiles: that is, by design, handled by the package system, so you no longer have to track all of those things in that place, which is nice. It also means we don't have to extract the huge mtree all the time. And depending on the options, whether I need a directory or not, whether the directory should be there or not, is something tricky right now; all of this gets handled automatically. Something I'm thinking about, but not working on right now, and that I'll probably propose once we have finished this, is to hook into installworld, so that installworld becomes something which creates the packages directly.
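The per-release repository idea could translate into a pkg repo configuration along these lines; the URL layout and names are purely illustrative:

    # /etc/pkg/FreeBSD-base.conf
    FreeBSD-base: {
        url: "pkg+https://pkg.FreeBSD.org/${ABI}/base/11.1",
        enabled: yes
    }

    # upgrade only the base system from that repository:
    pkg upgrade -r FreeBSD-base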
Well, we directly sync the packages into the system, like we do in the ports tree: we have pkg register, which is able to take a staging area and, instead of creating a package, directly install it. So when you do installworld, you automatically have the set installed properly, respecting the options you used to build, and everything is seen as a package. So, you already asked a couple of questions, but now it's time for questions. Yes? When can I use this on the FreeBSD cluster? Everything is in the projects/release-pkg branch. Right now, world is almost done. What we still need to do is the work of cherry picking, deciding which package goes where, but the main design is OK; it's already done. It's not entirely usable: I have a usable version on my machine, but I'm pretty sure no one will like the way I have split the packages. So we will cherry pick more and more files. Please come join and say: OK, I'm the maintainer of this part; I want to be able to provide my own upgrades, or I want this in its own package, whatever. Get involved, go into that tree, look at how it works, and please help. Yes? If I have installed from a release, how can I upgrade that to a system managed with these packages? We haven't determined that yet; that kind of detail isn't written down yet. You'll probably do a pkg install -f, a force install, of the packages; it will just overwrite all you have, and then you'll probably have a script to run to clean up whatever is no longer in the base system. I don't know yet, sorry, some details. But yes, we need to figure that out; it's not yet in the plan. Until now, the plan was to build the technical foundation to do this; now we can think about the different policies, the upgrade paths, and what we will provide. Maybe we will just say to users: OK, to go to 11, reinstall everything, because it's too complicated. Or... I don't know yet; this is all fresh. Why not scrape a package database from the installed system: create a manifest of the things we want to take over and manage, and create a fake package based on what's currently there? That could be an option. Once we have the very first support, is it going to be possible to set up an optional repo for 10, so that people can start? Well, merging that into 10 will be kind of tricky. It's doable, but it will require some work. Let's get one of them right first before we bring in everyone. Yeah. But I heard that some vendors are very interested in this, so if they are willing to do the job of merging it, we'll be happy with that. Glen: on behalf of release engineering, I want to dedicate that slide to you. Thank you.
Use pkg(8) to distribute, install and upgrade the FreeBSD base system. This talk will describe why we are packaging the base system, and what is/was needed to allow packaging it: - Prerequisite changes made in pkg(8) to handle the particularities of the base system - Prerequisite changes made or needed in the base build system to be able to create sane packages - Granularity of the packaging - Plans to satisfy most of our users: embedded users who want small packages, old timers who want big fat packages, administrators who want flexibility, developers who want to be able to provide custom packages for large-scale testing, and all the others. - What new possibilities/features packaging base will offer to users.
10.5446/18654 (DOI)
So, you have an example where you have PACKAGE=bhyve. Yeah. Do you have any idea how, since you have automatic tagging, you're going to tag specific files for docs and specific files for config, or something? Automatically, we cannot. But you have... oh yes, I'm sorry, I forgot: if you want to do that, it's here. I just showed how you cherry pick files into a new package and create a package that way. You also have something which is named TAGS. If you put a config tag on a given file, it will be tagged as config. If you want to tag it docs; well, docs we will not split, because docs is only /usr/share/doc, so it doesn't fit here. But if you want to create new tags, because tomorrow we may want to do more fine-grained things, or whatever you want to do with those tags, there is a TAGS macro that goes there, and it's handled automatically by the bsd.<something>.mk files. Can you specify which file you're tagging? When you're using the FILES macro, so bsd.files.mk, you have files groups. Already, right now, you have files groups: you have the list of files that go into a given group. And I added a per-group tags and a per-group package knob, so that you can cherry pick: you say, OK, everything in that group goes into that place, into that package, with those tags. The only thing I cannot do yet is say that one specific file is a configuration file. So what I do right now is create a new group, in that case, for the files that should be merged: installed in the same place, same destination directory on the target, but a new group. Maybe it could be better, but at least it works; a sketch follows. Peter? It seems to me that we can make this a lot more robust and complete by basically having the build target specifically be for staging, and then requiring pkg, or some other mechanism, to take things from the stage area into an installed system. Like, for example, you were talking about the configuration: if you actually installed the dma configuration files with dma itself... That's what I was proposing before. That simplifies the complexity of our build system, but it basically requires that we use a staging mechanism of some sort, be it pkg or some other mechanism that somebody else may want to use. We use a staging mechanism already. I know we do in ports, but I'm talking about doing it here; at the moment, the usual developer interface is installworld. What I'm saying is that the old world mechanism of installing directly into a live system goes away, in order to keep our own sanity, and we instead filter everything through a stage area. That is what I was proposing when I said we can use pkg register; pkg register is able to take a staging area... If we're going to do this, we should go all the way. So the mechanism is: from world, through staging, to either pkg or some other system or way of doing things. Yeah. And in meta mode, if I'm not mistaken, you build directly into the staging area. So if we mix meta mode with this, we reduce a lot of the overhead of a staging area, because everything is built directly in the right place. So we would have the staging by default. Any other questions? I was expecting more bikeshedding about those version numbers. Yes? Right now, we have mergemaster and etcupdate for merging configuration changes, and they both have features that are nice.
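A sketch of that files-group approach for a configuration file that must be merged. The per-group TAGS and PACKAGE knobs follow the pattern described above, but the exact variable spellings are my guess:

    # Makefile fragment (illustrative)
    FILESGROUPS+=   CONFS
    CONFS=          dma.conf
    CONFSDIR=       /etc/dma
    CONFSTAGS=      config       # files in this group get merged on upgrade
    CONFSPACKAGE=   dma          # and are tagged into the dma package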
With mergemaster, if a three-way merge fails with a conflict, you can resolve it interactively. And etcupdate provides information about what has changed, and you can use etcupdate diff to see what you have touched in the part of the system that will be installed. Is there going to be some sort of interactive auditing like that? There won't be the interactive part. I don't like interactivity in the package system, because you might want to run it through Puppet, Ansible, whatever. Well, we could have an option for that, but in my opinion it's not a good thing to do. The idea is to add, as we discussed yesterday for example, a pkg diff; that will happen. And a pkg resolve; well, it won't be called resolve, we don't have a good name yet, but something that will show you the baseline version of the file, so you can see the diff, things like this. But if something is not able to merge automatically and you would get into some interactivity, what we do is just install: if we're not 100% sure the merge is OK, we install the new file alongside the previous one, with the .new suffix, and you use your favorite tool to do the merging, vimdiff, whatever. Provide a hook for an external merge tool and mechanism? That could also be an option. Yeah, I haven't thought about a hook for an external tool; we could do that. But mergemaster is fairly heavily tied to having /usr/src present as well. So if the point of this is to make installing from packages the first-class thing, you don't actually need /usr/src for that kind of merge. Mergemaster, a bit. And you have to realize that there are not that many files that really need to be merged. I've been reviewing most of the things in /etc, and from memory I think it should be around 10 files that really need to be merged; something like that, maybe more, but not that much. Because most of the time we have the defaults under /etc/defaults, so if the file is in /etc, it's because you have modified it, and we don't have to install one of our own in that place. Wait, mergemaster does three-way merges? No, it's not three-way merge; etcupdate does three-way merges. I know. Yeah, we use it in the cluster; it's an absolute godsend. But yeah, mergemaster: I would share the hope that mergemaster dies as a result of this. Yes? I can see it for the defaults case, but I still wonder about the case where you have the defaults saved somewhere, and then you have your own config, and then something in an update changes the meaning of whatever you set, makes it crazy or turns it into something different. Maybe this is just a matter of watching what we do when we write the code, but that's a case I don't know if you really covered; do you get what I'm saying? Where the user's setting becomes wrong because of the upgrade, even though you have a default. You mean you're modifying a file you're not supposed to modify? I'm saying that you have something installed, you have the defaults, and then you have whatever the user set. Yeah. And then the upgrade of the software makes what the user set irrelevant, or makes it... well, there are a lot of issues there. Yeah, but there's nothing we can really do about that, except having an UPDATING message, because right now we have UPDATING. And UPDATING is a very cool thing if you're building yourself.
And this kind of note, saying the default is changing and if you have something already set it will be irrelevant with the new version, goes into UPDATING. If you're going through packages, it's useless to you, because you will never read it, given it lives in the sources. So the idea would be to provide those messages inside the packages. So when you do pkg upgrade, at the end you get the list, and it says: OK, your, let's say, dma configuration: if you already have a configuration for dma, the behavior has changed; you might want to review it again. But I can't see how to do more than a message for that. That perfectly covers my case; it just wasn't said yet. So that perfectly covers my case. So basically, that's UPDATING, but for the binary packages. Yes? I may have missed it: what happens with shared libraries when there is a version bump and something still depends on the previous version of the library? Yeah, so that's something we need to handle even for ports, and we don't. On my to-do list, I have a flag saying: keep the old version somewhere, which would be a libcompat kind of place, in case of bumps. This is not done yet in pkg, but it will happen before we officially ship packages like this, because I think it's important, in particular when upgrading from one version to a newer one. Yeah, I mean, we need to track the library dependencies. So, by the way, yeah. [Audience question, partly inaudible: you're trying to make those ten files merge cleanly, but users do tend to modify files they shouldn't, and they may not be aware of it; if a file's checksum doesn't match on upgrade, why not keep a copy of the user's file, install the new one, and alert them?] Well, actually, you'll get a warning for sure. You'll get a warning saying that this file was modified and that it was overwritten anyway. Right now, that's how you'd find out: pkg check -s, for now. What about keeping a copy of the overwritten file? I mean, today you have to do pkg check -s. Yes, that's something we could probably do. In my opinion, what we should do is rethink the way we do default configuration files: everything which is not supposed to be modified by the user should probably not be in /etc; it should be elsewhere. Well, that's a very long-term project if one wants to go there, but the idea is to hide from the user everything which is not supposed to be modified. And if we want to allow the user to override things, then we need the software to be able to say: OK, this is my default; this is the place for the user configuration file. If the user wants to modify something, he creates it under /etc for that service, and the default is always upgraded under /usr/share, whatever. In my opinion, /etc should be almost empty, and everything in the system which is not meant to be modified should be kept tracked by the packages. But still, yes, we could provide an option that keeps a copy of a previously modified file. Yes. I have kind of a slightly different answer. Think about it the way we have the override system for the ports collection: we ship a default at /etc/pkg/FreeBSD.conf, and that defaults to the upstream repository.
If you want to use your own repository, then you can create /usr/local/etc/pkg/repos/myrepository.conf, which overrides it. I think this might actually clarify your question about what happens right after a release. What I think is: if you're running 11.0, you're not necessarily going to upgrade automatically to 11.1. When we have a release, the release notes say how the binary upgrade is done, the command to run and whatever the files are, and I'm thinking it might be a good idea to do something like that here. And so that you don't upgrade accidentally, what I'm thinking is: if we ship the configuration saying this is FreeBSD 11.0, so track 11.0, and only 11.0, the security branch basically, it won't automatically get upgraded to 11.1 unless you specifically tell it to do that. I have a question on that particular subject. Could we perhaps do a repository message or something like that, so that for people who are on the 11.0 repository, when 11.1 comes out, we could stick a message in there: by the way, 11.1 is out? So, we have metadata on the repository. I haven't checked what exactly we have in it (Bryan knows this part better than I do), but the metadata is supposed to be able to define an end of life, so we can show a message to the user. Is that right? It's not fully implemented yet, but it'll be easy to add. Yeah. It seems to me that would be a really good thing to make sure we have under control, because we could actually put a message on a repository that is going out of support. Yeah, like freebsd-update, when it tells you that you won't have any updates after such a date. Yeah. That gives the choice to the user and makes them aware of it. So when they type pkg upgrade, they'll get a message saying: oh, by the way, you're on a repository that's got three months of support left, or something like that. OK. Thank you very much. Thank you.
Use pkg(8) to distribute, install and upgrade the FreeBSD base system. This talk will describe why we are packaging the base system, and what is/was needed to allow packaging it: - Prerequisite changes made in pkg(8) to handle the particularities of the base system - Prerequisite changes made or needed in the base build system to be able to create sane packages - Granularity of the packaging - Plans to satisfy most of our users: embedded users who want small packages, old timers who want big fat packages, administrators who want flexibility, developers who want to be able to provide custom packages for large-scale testing, and all the others. - What new possibilities/features packaging base will offer to users.
10.5446/18652 (DOI)
It's not changing. Oh, there we go. Okay. All right. Good afternoon, everyone. My name is John Criswell. I'm an assistant professor at the University of Rochester, and today I will be talking about protecting FreeBSD with the Secure Virtual Architecture. This is work that I did when I was a PhD student at the University of Illinois at Urbana-Champaign, and the work I'm going to be talking about today was joint work with Nathan Dautenhahn and our advisor, Vikram Adve. Now, my dissertation work starts with a very simple question: do you trust your operating system kernel? Now, here's the interesting thing. You say no; and yes, some people say yes, and some people say no. But in fact, in practice, we do. And the reason why is because on top of our commodity operating system kernels, such as the FreeBSD kernel, we run applications, and we use these applications to process sensitive information. So we may buy things online using our credit cards from websites like eBay or Amazon.com. We may file our taxes, which include sensitive information such as, if you're an American citizen, your Social Security number and your address. Medical data: my doctor happens to store my medical information on a Windows machine. And certain voting machines run commodity operating system kernels, or operating system kernels based on them, such as Windows CE. Now, we run these applications on our commodity operating system kernels, but maybe this is not such a good idea. And the reason why is because commodity operating system kernels are vulnerable. They're vulnerable for two reasons. First, all commodity operating system kernels today are written in C and C++, and as a result, they suffer from the same security vulnerabilities that applications suffer from when coding mistakes are made. So, for example, commodity operating system kernels have been vulnerable to buffer overflow and integer overflow attacks. Additionally, there are vulnerabilities that occur in operating system kernels due to their job as the operating system kernel. For example, the operating system kernel provides the process abstraction, which is supposed to isolate different processes, different programs, from each other. But sometimes logic bugs in the operating system kernel allow information to leak from one process to another unintentionally. For example, through misconfiguration of the MMU, one process may be able to access the memory of another process. Finally, all commodity operating system kernels to date are dynamically extensible: you can load new drivers, load new kernel modules into the system as it is running. Now, because of how the operating system kernel is structured, these new modules, when they're loaded, can modify the operating system kernel's behavior in arbitrary ways. And so attackers have taken advantage of this. They've written kernel-level malware, typically to hide their presence, such as hiding processes and files and network connections. But because they can implement any arbitrary behavior that they want, they can do things like stealing data from applications, corrupting data within applications, even modifying application control flow. Now here's the real kicker: if the operating system kernel is exploited, then all security guarantees on your system are null and void. If your operating system kernel is compromised, you have no security on your system. And the reason why is because nearly all security policies are either enforced by the operating system kernel itself.
So if the attacker controls the operating system kernel, he or she can just turn the security policy off. Or they are enforced by applications running on top of that operating system kernel, and because monolithic operating system kernels have such a great amount of control over the system, they can just reach into the application, change its code, change its data, and thereby turn off the security policy enforcement within the application. So if the operating system goes, everything goes with it. Now, there are two approaches to addressing this problem if you don't want to abandon an existing commodity operating system kernel; that is, if you don't want to rewrite it, if you don't want to do massive restructuring and refactoring of the kernel. The first is to automatically harden the operating system kernel against certain classes of attack. I've looked at this in my PhD dissertation and built systems such as SVA-M, which enforces strong memory safety guarantees on commodity operating system kernels, namely Linux, and then, more recently, the KCoFI system, which enforces a lighter-weight security policy called control flow integrity on the FreeBSD kernel. Now, while these approaches are good, one limitation that they have is that they only address particular classes of attacks. The SVA-M and KCoFI systems, for example, address buffer overflow attacks and attacks of that nature, but they don't address things like kernel-level malware that's loaded into the operating system kernel; they don't handle information leaks; they don't handle missing access control checks. So another approach that I explored in my dissertation is to just assume that the operating system kernel becomes compromised. Just raise up your hands and say: OK, we're going to assume that the operating system kernel can be compromised; can we run applications securely on that kernel anyway? And so we built a system called Virtual Ghost, which provides data confidentiality and integrity for applications running on a potentially compromised operating system kernel. Now, today in this talk, I'm going to talk a little bit about KCoFI and Virtual Ghost. Most of the talk will be about Virtual Ghost, because in fact Virtual Ghost is built on top of the KCoFI system: we have control flow integrity for the FreeBSD kernel, and then we build Virtual Ghost on top of that. The contribution of Virtual Ghost is that it protects application data confidentiality and integrity, as well as other features of the application. It uses compiler techniques, and because it uses compiler techniques, we can run at the same processor privilege level as the operating system kernel. We do not need to use hypervisor-based approaches, if you will, where we kick the operating system kernel up into ring 1 on x86, or use VMM extensions that allow us to run code below the operating system kernel. We can actually run alongside the operating system kernel. And it turns out that although we're using compiler instrumentation techniques to add instructions to the operating system kernel code, we are still faster than hypervisor-based approaches. So that's a brief overview of the research work. I'll now talk about the design of Virtual Ghost, then take a small aside and talk about some of the design and implementation of KCoFI. I'll then talk about our experimental results: a little bit about the implementation and the experiments that we ran on KCoFI and Virtual Ghost.
And then, finally, I'll conclude with some future work that we're doing at the University of Rochester. All right: what is the fundamental problem with current system design? The fundamental problem is that applications cannot protect themselves from the operating system kernel. If I want to write an application that does not trust the operating system kernel, I will be inclined to write an application that keeps its data encrypted as long as possible: encrypt data when it sends it to the file system or to the network, maybe even keep data encrypted while it's stored in memory. The problem is that even if the application does this, it cannot protect itself from the operating system kernel, because the operating system kernel can access anything and everything on the system. The operating system kernel can just reach in and read unencrypted data out of the application, or modify the encryption keys to some value that the operating system likes. The operating system kernel can modify the application's code so that it just doesn't encrypt anything at all. It can even modify the application's control flow: it can stop the application, change its program counter, and then resume the application, causing it to skip over encryption and decryption operations. So no matter how hard an application tries on an existing operating system kernel, such as FreeBSD or Linux or Mac OS X, it simply cannot protect itself. The goal of Virtual Ghost is to build a system that allows applications to protect themselves. And there are three features that such a system requires. First, applications require private data and private code. They will still need public memory that the operating system kernel can read and write, so that they can communicate with the operating system kernel; but they also need memory that the operating system kernel cannot read and cannot corrupt. Second, they need incorruptible control flow. They need to know that when they start execution, they start execution in their main function. They also need to know that if they're interrupted by a trap or an interrupt, or if they execute a system call, the operating system kernel is not going to be able to modify their control flow maliciously while the operating system kernel is running. Third and finally, applications that protect themselves are going to need a reliable way of getting their encryption keys from some file, typically the executable image, into that private data memory region without the operating system kernel being involved. Because if the operating system kernel is involved in that process, it can change the encryption keys, or read the encryption key values, thereby defeating the purpose of having the application encrypt its data before handing it to the operating system in the first place. So these are the three things that we need. How hard could it be? Well, there are two challenges. The first, more obvious, challenge is that modern processor design assumes that system software like the operating system kernel should be able to access all of memory. But there's actually a more subtle challenge when you start trying to do this. The more subtle challenge is that if we want operating system kernels to provide the features that we expect them to have, they must be able to manipulate application state.
It doesn't suffice to put an application over here and the operating system kernel over here and say never the twain shall meet. Because if you did that, the operating system kernel would not be able to create new processes and new threads. It would not be able to execute new programs by providing the exec family of system calls. It would not be able to provide signal handler dispatch, because all of these operations modify application state: they go in and make changes to the application's program counter. So instead of preventing the operating system kernel from manipulating application state, we must allow it to do so, but we must control what it does. We must ensure that it makes good state modifications, but not bad state modifications. All right, so we're going to need some infrastructure to do this. We're going to use the Secure Virtual Architecture (SVA) from our previous research. In the Secure Virtual Architecture, instead of compiling the operating system kernel down to native code, and instead of having inline assembly code written into the operating system kernel, what we're going to do is compile the operating system kernel to a virtual instruction set. And we're going to design this virtual instruction set to be easy to analyze and instrument. Now, handwritten assembly code is not easy to analyze and instrument. So what we're going to do is port the operating system kernel to our virtual instruction set. We're going to have a set of instructions in that virtual instruction set called SVA-OS, which we can use to replace inline assembly code. The SVA-OS instructions basically provide features such as registering interrupt and trap handlers, being able to configure the MMU, and being able to manipulate interrupted program state. In this way, we can represent an entire operating system kernel in the virtual instruction set. The operating system kernel will have no inline assembly code, no handwritten assembly code in it at all. Now, you can't actually run a virtual instruction set operating system kernel. You have to translate it to native code, like x86 or ARM or PowerPC or MIPS, in order to run it on a real processor. Now, when I give this talk, some people tend to assume that we're doing something like Java: you have a virtual instruction set and a native instruction set, so the translation must happen just in time, just like Java does. No. Secure Virtual Architecture is designed so that translation can happen anytime you want. It can happen ahead of time, at system boot time, at install time, at run time, at idle time. In our prototype, we implement translation ahead of time, because (a) it's easier to do, and (b) it's more efficient. Now, let's look at the virtual instruction set in a little bit more detail. It's comprised of two components. The first component is SVA-Core. SVA-Core is taken from the LLVM intermediate representation, the language that the LLVM compiler uses to analyze and optimize code. It has source-level types and explicit static single assignment form; these are things that allow us to do sophisticated compiler analysis and instrumentation on the operating system kernel code. Now, the SVA-Core instruction set is what you would call regular computation: it provides things like adding and subtracting, pointer arithmetic, reading from and writing to memory, those sorts of things.
Now, the LLVM IR, if you take away the inline assembly code feature, can't support an operating system kernel. You can't express the FreeBSD kernel or the Linux kernel in LLVM IR. So we extended it with a new set of instructions called SVA-OS. These are operating-system-neutral instructions; we've used them with both Linux and FreeBSD. And they encapsulate the state manipulation and hardware configuration operations. So again, they provide instructions for configuring page table pages, modifying interrupted program state for signal handler dispatch, registering system call handlers and trap handlers, and things like that. Now, what's interesting is that not only do they encapsulate these operations, but because the operating system kernel has to use these instructions to interface with the hardware and to manipulate state at all, they allow us to control how the operating system kernel does these things. We can control how the operating system kernel configures the MMU. We can control how it manipulates application state. So when we implement these SVA-OS instructions, we add runtime checks to them to help enforce the security policy that we're also enforcing with the compiler instrumentation. One implementation of the SVA-OS instructions helps enforce control flow integrity for the KCoFI system. Another implementation provides the runtime checks that we need for Virtual Ghost to protect applications from the operating system kernel. Now, implementation-wise, we implement the SVA-OS instructions as a native code runtime library. So essentially what happens is: you express your operating system kernel in the virtual instruction set, you analyze and instrument it, and you convert it down to native code. The implementation of the SVA-OS instructions is missing from that native code, so the runtime library is linked in to provide the implementation of those instructions. You now have a complete native code kernel that you can boot on real hardware. All right, now how are we going to use the Secure Virtual Architecture to implement Virtual Ghost? How are we going to provide those three features that we need? Well, let's first look at private data and private code. Most operating system kernels, such as FreeBSD, divide the virtual address space into two partitions: the user space partition and the kernel partition. User space memory is where the application lives, kernel space memory is where the kernel lives, and the kernel is allowed to access both user space and kernel space, whereas applications can only access user space. Virtual Ghost adds two new partitions to the virtual address space. The first one is ghost memory. Ghost memory is memory that the application is allowed to read and write, but the operating system kernel is not. This is where the application is going to put its private data and its private code. Now, the implementation of the SVA-OS instructions, that runtime library, has its own data structures that it uses in implementing its runtime checks. Those data structures should not be accessible to applications or to the operating system kernel. So there's another region of memory called the Virtual Ghost VM memory, or VM memory for short. This is where the SVA-OS data structures go. This region is not readable or writable by applications or by the operating system kernel.
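To make the layout concrete, here is a small sketch of the four partitions as C constants, with a predicate saying what kernel loads and stores may touch. The boundary values are invented purely for illustration; the real system lays these regions out in the 64-bit x86 virtual address space.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative partition boundaries (invented values, not the real map). */
#define USER_START   0x0000000000000000UL  /* application memory */
#define USER_END     0x00007fffffffffffUL
#define GHOST_START  0xffff800000000000UL  /* app-private: kernel may not touch */
#define GHOST_END    0xffffbfffffffffffUL
#define VM_START     0xffffc00000000000UL  /* SVA-OS runtime data: nobody else */
#define VM_END       0xffffcfffffffffffUL
#define KERN_START   0xffffd00000000000UL  /* ordinary kernel memory */

/* What a kernel load or store may touch: user or kernel memory,
 * never ghost memory and never the Virtual Ghost VM memory. */
static bool kernel_may_access(uintptr_t a)
{
    return a <= USER_END || a >= KERN_START;
}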
So essentially we provide these two new regions, contiguous in memory, that the operating system kernel is not allowed to read or write. Now, by this time you're probably asking me: John, how do you keep an operating system kernel from writing into ghost memory and VM memory? The secret sauce is software fault isolation instrumentation. What we do when we're translating from virtual instruction set code to native code is look for all the load and store instructions in the operating system kernel and add some instructions before them. What these instructions do is check whether the pointer that's going to be used in the load or store is pointing into user space memory or kernel memory. If it is, fine; the pointer does not need to be changed. If the pointer is erroneously pointing into ghost memory or into the Virtual Ghost VM memory, then a simple bit-masking operation moves the pointer into kernel memory. In this way, we are guaranteed that all loads and stores access either user space memory or kernel memory, but not ghost memory and not VM memory. Now, let's say your operating system kernel unfortunately has a buffer overflow in it. An attacker could potentially use that buffer overflow to change the control flow to jump over these new instructions that we've added, and that would allow the operating system kernel to access ghost memory or VM memory. That would be bad; it would violate the Virtual Ghost security properties that we're trying to enforce. So what we do is use control flow integrity. By using control flow integrity instrumentation along with the software fault isolation instrumentation, we can ensure that the software fault isolation instructions are always executed. They can never be jumped over, even if your kernel has a buffer overflow or some other memory safety error. In addition, the control flow integrity instrumentation helps protect the operating system kernel from buffer overflows and related attacks. So we get a two-for-one deal with Virtual Ghost: we can protect the operating system kernel from attack and protect applications from the operating system kernel with one set of instrumentation. All right, so now we have our private data and our private code. What about secure application control flow? Well, why isn't application control flow secure today? The reason is that on an interrupt, trap, or system call, the hardware transfers control flow to the operating system kernel, and the operating system kernel saves the interrupted program state on the kernel stack. The kernel stack is in kernel memory, in kernel space, so the operating system kernel can read and write it as it likes. In Virtual Ghost, when there's an interrupt, trap, or system call, the hardware transfers control flow to the Virtual Ghost runtime library, that implementation of the SVA-OS instructions. That runtime library saves the interrupted program state not into kernel memory, but into the Virtual Ghost VM memory. Then Virtual Ghost transfers control flow to the operating system kernel. So the operating system kernel can respond to the interrupt or trap or system call as appropriate. But now, when the operating system kernel wants to modify application state, it can't do so directly. That state is sitting in the Virtual Ghost VM memory where the operating system kernel can't touch it.
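As a brief aside, the load/store check just described can be rendered in C roughly as follows, reusing the invented constants from the sketch above. The production version is a short branch-free bit-masking sequence of native instructions emitted by the code generator, not a function call.

#include <stdint.h>

/* Inserted before every kernel load and store.  If the pointer strays
 * into ghost memory or VM memory, masking forces it into kernel memory,
 * so the access can neither read nor corrupt protected data. */
static inline void *sfi_mask(void *p)
{
    uintptr_t a = (uintptr_t)p;
    if (!kernel_may_access(a))
        a |= KERN_START;          /* redirect into kernel memory */
    return (void *)a;
}

/* Example rewrite performed by the compiler:
 *     *q = v;   becomes   *(int *)sfi_mask(q) = v;   (for an int *q) */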
So if the operating system kernel wants to make a change to, say, the program counter or the stack pointer, it has to ask Virtual Ghost to do it through an SVA-OS instruction. In this way, the SVA-OS instructions can vet changes to saved program state, and if the changes are okay, go ahead and make those changes on behalf of the operating system kernel. This lets us control operations such as signal handler dispatch, thread and process creation, and the exec family of system calls. As an example, let's look at how the exec system call is implemented on a Virtual Ghost system. You have the operating system kernel, and there's some application executable that it wants to execute. Some program has executed the exec system call, so it's been interrupted and we are now running in kernel code. The kernel says: hey, Virtual Ghost, I have this application executable here; please set up the code segment for this executable, and please change the saved program state so that when I resume it, when I put it back onto the CPU, it will start up in the main function of this executable. So Virtual Ghost says: okay, I will go and set up the application code segment in the application's ghost memory. Now we have the application's code segment in ghost memory; the operating system kernel cannot arbitrarily modify the application's code, but the application can use it. Then Virtual Ghost locates the virtual address of the main function, changes the program counter to point to it, and returns back to the operating system kernel. So when the operating system kernel does the return from the system call, the saved program state will start executing this new application code in its main function. What we've done is essentially taken the operating system kernel out of the critical path where we would otherwise rely on it to do the correct thing. Instead, Virtual Ghost does the critical operations of setting up the code segment and changing the program counter to point to the main function. All right, so now we have private data and private code, and we have secure application control flow. The last thing we need is secure application encryption keys. So let's say you're an application developer. You've written an application, you generate a public-private key pair for that installation of the application, and you're going to run this on a Virtual Ghost system. If you just embed the application code and the application key into the executable file and send that over to a Virtual Ghost system, your operating system kernel can do one of two nasty things. The first thing it can do is simply say: well, I like the code, but I don't like that encryption key, because I don't know what it is. So I'm simply going to replace it with my own public-private key pair. Then, when this application goes and encrypts data, I know what key it's using. Alternatively, the operating system kernel may say: well, I like that application public-private key pair, and this application came with some files that are encrypted using those keys, but I want to be able to see what's in those files. So I'm going to change the code in the application so that it decrypts the data with that application key pair, and then I can see it.
What we need to do is tie the application code and the application key pair together, so that if they are tampered with by the operating system kernel, we can detect the tampering and refuse to run the program. So every installation of Virtual Ghost will have a public and private key. The Virtual Ghost public key will be used to encrypt the application code and the application key, along with a checksum over the combined application code and application key pair. What this allows us to do is that when the operating system kernel comes along and says, hey, Virtual Ghost, please set up the code segment for this application, Virtual Ghost can verify that the code has not been modified, that the key has not been modified, and that this code and this key actually go together: it's the right code and the right key paired together. If nothing has been tampered with, then Virtual Ghost sets up the code segment as we talked about a few slides ago, decrypts the application key pair, and puts it into ghost memory. In this way, the operating system kernel is never involved in getting that application key pair out of the executable and into the process's ghost memory. It's not involved in that process, and so as a result, it cannot corrupt that process. Yes? If everything is encrypted to Virtual Ghost, and the application code is paired with a hash, but the hash is made with the same key pair, can we mix and match: hold back a message and later replay a previous version of the code, or deliver a code and key pair out of order? I'm not sure if I quite understood. Are you essentially saying: if I'm an application developer and I create one version of my application and I ship it, is it possible to send another version of the application? Yeah, I can say, all right, let's use the old version of the application. Right, right, okay. So I think my answer to that is that if the system the developer is using is running on Virtual Ghost, or if it's a system outside, like an app store from Apple or something, then that should not be a problem, because the operating system kernel can't get in and modify that. It shouldn't be able to corrupt the process and create the second version of the application, correct? Although I think there is a rollback attack there. So I think what you'd probably want to do is, every time you update an application, give it a new application key pair, and then re-encrypt all of the files with that application key pair. So yeah, I had not thought of that; thank you. Any other questions? Yes? So I have a program, and it's running. I am running low on available memory, and I want to swap it out. So this segment of code comes from user space, goes to disk. Now this program is executing again, and I wish to put it back in place. At this point, the kernel will own the buffer that has just been read from disk, which has the application code on it.
I modify that code to be something that reads from the secret memory, which I can do, because it will be executed from the context of the other process. I then swap that back in and allow it to continue executing without changing its instruction pointer or anything, but now I have arbitrary code execution in the context of the process, and I can use that to steal the keys. I don't see any way to protect against that without keeping everything resident. So let me rephrase your question: you're asking how secure swapping would work, right? So as it stands right now, you can't swap out anything in ghost memory. The operating system cannot swap out anything in ghost memory, because it can't read it and it can't write to it. So first you said, okay, let's say you swap out the code segment or the application key pair. Well, the operating system can't do that, because it's in ghost memory. Now, in the paper we talk about a design for allowing secure swapping: if the operating system kernel wants to swap out memory that happens to be mapped into ghost memory, it can do so, and what Virtual Ghost will essentially do is encrypt and digitally sign the contents and then allow the operating system kernel to have access to that. We haven't implemented that, because we haven't run into a case where we actually need to swap ghost memory out. I don't need to swap ghost memory; I can swap any memory that's in that process. So, executable memory should be in ghost memory, yes, because otherwise the operating system kernel can corrupt it. Okay, yep. Yes? Do you account for modifications that might happen through direct memory access? Yes. We haven't implemented it, but that's easily solvable through IOMMUs. You have Virtual Ghost control access to the IOMMU, so that the operating system kernel cannot reconfigure it; the Virtual Ghost runtime, the implementation of the SVA-OS instructions, basically configures the IOMMU so that the operating system kernel can't use DMA to get information into and out of physical memory that's mapped into the ghost memory region. What I don't really understand is: can't we still affect the behavior of the process through system calls, or modify its trap frame? You can't modify the trap frame. Because the trap frame, which we call interrupted program state, is saved into the VM memory, which the operating system kernel cannot access. Or at least it can't access it without going through Virtual Ghost, and the interface of Virtual Ghost, combined with its runtime checks, ensures that the changes are not going to violate the application's control flow integrity. So what if I just, you know, assume the program does an mmap system call, and I give it back a buffer that's not within ghost memory? Excellent, excellent. You have just reinvented the Iago attack. There's an attack called the Iago attack in which, since the operating system kernel can't access the application memory anymore and it can't corrupt the control flow, it just returns bogus values from system calls. And when it does that, maybe it can trick the application into doing something it doesn't want to do. So maybe mmap returns a pointer into ghost memory, and the application just assumes that's okay. In future work, I'm actually having a student work on that.
So I will get to that in basically my last slide. But yes. Yes? Just out of curiosity, you said that you created the protected ghost memory by instrumenting all the memory accesses at compile time. Yes. But are you able to catch the kernel trying to patch itself to create a new, uninstrumented memory access? That's because we don't allow the kernel code segment to be writable, which I'll get to in a few slides. Okay. But the second thing is, kernels do patch their own code in places; what if you want to use that? Yeah, so in our USENIX Security 2009 paper, we have a solution for the limited amount of patching that Linux actually tries to do. But in general, the operating system kernel is not allowed to make arbitrary changes to its native code segment. Yes? And does that have to hold for user space as well? Would I be able to patch previously translated application code? Actually, I haven't really thought about that. I think user space is actually fine, and we'll get to the reason why in a minute. All right. All right, I'm going to move on, and we're going to take our KCoFI break. So KCoFI is essentially a subset of Virtual Ghost. KCoFI divides the virtual address space into three regions: the user space region, the kernel region, and then the KCoFI VM memory region. This is the Virtual Ghost VM memory, just renamed as KCoFI VM memory. KCoFI provides control flow integrity, so that we know where we're jumping to in the kernel code segment. It gives us code segment integrity, so the operating system kernel code does not change. It gives us software fault isolation on stores. Unlike Virtual Ghost, which has to maintain the confidentiality and integrity of VM memory, KCoFI only needs to protect the integrity of its VM memory, so it only has to do software fault isolation instrumentation on stores. The checks that its SVA-OS runtime library does only enforce control flow integrity and code segment integrity, and it doesn't have the encryption key delivery feature or the other Virtual Ghost specific features. So basically, KCoFI is what gives us the control flow integrity and code segment integrity that Virtual Ghost requires, and then Virtual Ghost builds on top of that the features to protect applications from the operating system kernel. How does KCoFI work? Well, it could enforce control flow integrity in multiple ways. In our prototype implementation, we use the approach from Zeng, Tan, and Morrisett in their CCS 2011 paper. When we translate to native code, at the beginning of every function and after every call instruction, we insert a special no-op instruction, a no-op that does not appear anywhere else in the kernel code segment. Then we instrument all computed jumps, so every return and every indirect function call, to first take the address that the kernel wants to jump to and bit-mask it so that it's not pointing into user space memory. Now, someone asked me: well, what about instructions in user space memory? We allow arbitrary native code to be run in application space, because we control what the kernel executes; Virtual Ghost still has to set up the application code segment so that it's executable. So as long as the kernel is not tricked into running user space code, which is what the bit masking prevents, we're fine.
So we bit-mask the pointer, the address, to make sure that it's in the kernel code segment, and then we check that the place we're jumping to actually contains one of these no-op labels. If it does, great; if not, then we use a direct branch to jump to some error handling code. Now, for returns from interrupts, traps, and system calls: in modern operating systems, the kernel can actually interrupt itself, so the operating system kernel can experience a trap or an interrupt while it's running. Just like application state, we save that state into the VM memory, where the operating system kernel cannot directly modify it. In this case, we prevent a buffer overflow from overwriting the saved program counter of the interrupted kernel, or from changing the saved privilege level of the interrupted kernel. We also have instructions that allow us to do the exception unwinding that's used in efficient implementations of copyin and copyout. So we can still use the MMU to catch faults in copyin and copyout operations, and securely unwind the control flow while maintaining control flow integrity. Now, as I said before, we also provide code segment integrity. By controlling access to the page table pages, and by controlling the MMU, both KCoFI and Virtual Ghost ensure that the code segment is never made writable, and that no changes ever map new data or new code into the code segment. There is an instruction that allows you to dynamically extend the code segment of the operating system kernel. So if you want to load, say, a device driver, you can give Virtual Ghost or KCoFI the virtual instruction set code; it'll translate that down to native code, do the instrumentation, and then add that to the kernel's code segment. But otherwise the operating system kernel is not allowed to make arbitrary changes to its native code segment. All right, back to Virtual Ghost. All right, so now I'm moving on to the results. We implemented a prototype of Virtual Ghost and KCoFI for 64-bit x86. We ported FreeBSD 9 to Virtual Ghost; if you're wondering why it's so old, it's because we started this back in, I think, 2012. The trusted computing base is about 5,300 source lines of code; this is the size of the compiler passes that we wrote and the runtime library that we implemented. And then we modified several applications from the OpenSSH application suite to use ghost memory. The SSH client, the SSH key generation program, and the ssh-add utility use ghost memory for their heap. They really should be using it for globals and the stack as well, but in our prototype implementation we're only doing the heap. Basically, what they do is create authentication keys that are encrypted, so the operating system kernel cannot read them, cannot access them, but the SSH client and the ssh-add utility can use them. All right, this is now released as open source software. We released the LLVM compiler extensions, the Virtual Ghost runtime library, and a patch to the FreeBSD 9 kernel code, which gives you the port of FreeBSD 9 from x86 to the Secure Virtual Architecture virtual instruction set. This is available on my GitHub account, github.com/jtcriswell, with two Ls. There's also a link to this from my homepage at the University of Rochester.
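As an aside before the experiments: here is a rough C rendering of the KCoFI computed-jump check described above. The label value is invented, KERN_START is the invented constant from the earlier sketch, and the real check is a handful of native instructions emitted during translation, not a C function.

#include <stdint.h>

#define CFI_LABEL 0xf1afc0deU   /* invented encoding of the special no-op */

extern void cfi_abort(void);    /* error handler, reached by a direct branch */

/* Inserted before every return and indirect function call in the kernel. */
static inline void cfi_checked_jump(void (*target)(void))
{
    uintptr_t a = (uintptr_t)target;

    /* 1. Bit-mask the address so it cannot point into user space. */
    a |= KERN_START;

    /* 2. Every legal target was prefixed with the special no-op label
     *    during code generation; anything else is a CFI violation. */
    if (*(const uint32_t *)a != CFI_LABEL)
        cfi_abort();

    ((void (*)(void))a)();      /* label present: proceed with the jump */
}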
All right, now, the first experiment that we ran was to see whether we could stop a sophisticated malware attack, and specifically one that was designed with Virtual Ghost in mind. So we wrote a malicious kernel driver. What this malicious kernel driver does is try to set up a false signal handler within the application that will copy data from ghost memory to traditional memory. If it can do that, then the malicious driver can just read the data out of traditional memory. So if it can get a memcpy function to act as a signal handler, it can steal data out of ghost memory by tricking the application into copying it into traditional memory. Now, this is way more sophisticated than what you need on standard FreeBSD; on standard FreeBSD there is no ghost memory, so you can just read data straight out of memory. Nevertheless, this works on native FreeBSD. It doesn't work on Virtual Ghost, and the reason it doesn't work on Virtual Ghost is that Virtual Ghost is protecting the application's saved program state. When the application starts running, it tells Virtual Ghost, through a system call that goes straight into the Virtual Ghost runtime library: here is where all of my signal handlers are. So when the malicious driver says, hey, Virtual Ghost, please change the program counter to point to this memcpy function, Virtual Ghost says: that's funny, the application didn't say that this memcpy function was a signal handler. So no, I'm not going to change the saved program state at all. I'm going to leave it completely unchanged. So, operating system kernel, if you continue to run this application, it's going to continue running right where it left off before the interrupt, trap, or system call. So that's how we stop this rather sophisticated piece of kernel malware. All right, how does Virtual Ghost perform in terms of execution time? We wanted to compare Virtual Ghost to other approaches. Other approaches try to magically encrypt application pages when the operating system kernel tries to access them; the most recent of these is a system called InkTag, which uses the VMM extensions in the processor. We compared our LMBench results to InkTag's, and what we found is that we do quite a bit better than InkTag: Virtual Ghost is usually about 4x to 5x overhead on system call latency, normalized to native, whereas InkTag is more like 7x to 9x. Also, Virtual Ghost has very low overhead on some key benchmarks: for example, on page faults, Virtual Ghost only adds 15% overhead, whereas with InkTag you have 7.5x overhead. Now, this is not an apples-to-apples comparison, because InkTag uses Linux, we use FreeBSD, we're using different machines, and we're not even using the same version of the LMBench suite, but it gives us a ballpark figure, and the ballpark figure says that Virtual Ghost is doing pretty well. Now, what about comparing Virtual Ghost to KCoFI? These are our LMBench results comparing KCoFI to Virtual Ghost. Because of the additional software fault isolation instrumentation on loads, Virtual Ghost does incur a fair amount more overhead than KCoFI does. So one thing this tells us is that the software fault isolation instrumentation actually does matter; it is actually hurting performance. So if we could get rid of it, that would be nice. All right, now, that's microbenchmarks, that's the latency of system calls.
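As a brief aside, the signal-handler vetting that stopped this driver can be sketched in a few lines of C. All names here are hypothetical, but the logic follows the talk: the application registers its handlers directly with the runtime, and any request to retarget the saved program counter elsewhere is refused.

#include <stddef.h>
#include <stdint.h>

#define MAX_HANDLERS 64
static uintptr_t handlers[MAX_HANDLERS];
static size_t    nhandlers;

/* Hypothetical SVA-OS operation that edits saved state in VM memory. */
extern void sva_set_program_counter(void *saved_state, uintptr_t pc);

/* Application -> runtime, at startup (the kernel is bypassed entirely). */
void vg_register_handler(uintptr_t h)
{
    if (nhandlers < MAX_HANDLERS)
        handlers[nhandlers++] = h;
}

/* Kernel -> runtime, when it wants to dispatch a signal. */
int vg_dispatch_signal(void *saved_state, uintptr_t h)
{
    for (size_t i = 0; i < nhandlers; i++) {
        if (handlers[i] == h) {
            sva_set_program_counter(saved_state, h);  /* vetted: allow */
            return 0;
        }
    }
    return -1;  /* not a registered handler: saved state stays unchanged */
}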
Most applications do not spend most of their time executing system calls; they spend most of their time doing computation. So we wanted to see what the effect on performance is for actual applications. We took two network servers, thttpd and sshd, and ran performance experiments on them. We ran our experiments on an isolated one-gigabit-per-second network, and the reason we chose network servers is that, unlike other standard benchmarks like the SPEC benchmarks, they spend a significant amount of their time in kernel space. They spend a lot of time actually using kernel services, and therefore the overheads that we're adding to the kernel are more likely to show up. In our first experiment, we took thttpd and used ApacheBench to transfer files between one kilobyte and one megabyte in size. We configured ApacheBench to use 100 clients operating in parallel, doing 100,000 requests, and what we see is that the performance overhead is negligible. Now, I haven't shown the KCoFI numbers, mainly because we configured ApacheBench a little bit differently there; we used, I think, 32 clients instead of 100. But the results are essentially the same: negligible overhead for thttpd. All right, that's multiple clients. What happens if what you want to do is just transfer one file over the network as quickly as possible, using encrypted connections so that no one changes your file or snoops on its contents during the transfer? We took an unmodified SSH server, so this is not using ghost memory, running on a native FreeBSD system, the KCoFI system, and the Virtual Ghost system. We used an SCP client on another machine to transfer files between one kilobyte and one gigabyte in size, and we measured the bandwidth through the verbose mode of SCP. What we find is that KCoFI incurs a 27% reduction in bandwidth in the worst case, whereas Virtual Ghost reduces the bandwidth by 45% in the worst case. So the overheads are not completely terrible, but obviously there's room for improvement. All right, now, what happens if you use ghost memory? So far these experiments have basically shown what happens to existing applications if you put them on a KCoFI system or a Virtual Ghost system. What happens if you start using ghost memory? Is there any cost to that? We took our SSH client, which uses a wrapper library that copies data between traditional memory and ghost memory when it wants to do read and write system calls. We took this ghosting SSH client, ran it on a Virtual Ghost system, and used it to transfer files from another system, between one kilobyte and one gigabyte in size, and then we also tested the original SSH client. So this is the original SSH and the ghost memory SSH, both running on the Virtual Ghost system. What we find is that there's a 5% reduction in the worst case, typically for the larger file sizes. Why for the larger file sizes? Well, we suspect that the overhead is coming from the fact that we're copying data between traditional memory and ghost memory when we're doing reads and writes. Now, here's the good news. The good news is that a lot of this copying is unnecessary, because SSH is encrypting and decrypting data as it sends it to and receives it from the network, and encryption and decryption have an implicit copy operation.
So what this SSH client is doing right now is: when it receives encrypted data, the operating system kernel puts it in traditional memory; SSH copies it into ghost memory, because that's what the wrapper library for the read system call does; then it takes that encrypted data in ghost memory, decrypts it, and makes another copy in ghost memory. If we hand-tuned SSH, what we could do is: the encrypted data comes into traditional memory, and then it's decrypted and copied into ghost memory in one operation. So instead of having these two copies, we only have one copy. So in a nutshell, while a 5% reduction isn't bad, we think we can do better. All right, future work. One of the things that we're going to have a student start working on soon is replacing and reducing the compiler instrumentation. As you've seen, the compiler instrumentation, while it is better than using the VMM extensions the way that InkTag did, still has overhead that's not negligible, at least on the LMBench suite and on applications like sshd. We think we can replace the software fault isolation instrumentation using address space identifiers or ARM domains, or perhaps only small modifications to the MMU of a processor, to provide the isolation features that we need. If we can remove the software fault isolation instrumentation, we should be able to significantly improve the performance of both KCoFI and Virtual Ghost. A second thing that we're working on is defenses against Iago attacks. Someone here, I believe, mentioned the Iago attacks. Iago attacks, again, are attacks in which the operating system kernel returns bogus values to an application through the system call interface to try to trick it into doing something that it doesn't want to do. Our observation is that this is essentially an application trusting low-integrity data. So by using standard programming language information flow techniques, we can check whether an application is doing computation on high-integrity data and whether that computation is being influenced by low-integrity data from the operating system kernel. In this way, we should be able to build a system that formally verifies that an application is not vulnerable to these Iago attacks. We're also building a system that will automatically determine the efficacy of control flow integrity and code pointer integrity. If you're following the control flow integrity literature, there are now new attacks against coarse-grained control flow integrity that allow attackers to perform malicious computation. So we now have this open question: how good does your control flow integrity need to be? Obviously, coarse-grained control flow integrity, where you don't distinguish between different call targets or different return targets, isn't sufficient. But if you use a more accurate call graph, is that good enough? If you have perfect control flow integrity, otherwise known as code pointer integrity, is that good enough? No one knows, and currently the only way we can answer that question is by having four graduate students trying to create new attacks against these systems.
My goal, and I have some NSF money to do this now, is to try to build an infrastructure that, given an application and a malicious computation that we might want to execute, can tell us: will this defense allow the malicious computation to be executed, yes or no? In this way we can have a much more systematic evaluation of our defenses. Finally, I have a Google Summer of Code student working on producing a tighter call graph for KCoFI. If you notice, KCoFI is using one of these coarse-grained call graphs. Now, that's actually good enough for Virtual Ghost; it's actually better than what we need for Virtual Ghost. But if we want to defend operating system kernels from sophisticated buffer overflows, then we're going to want something better than the coarse-grained call graph that we're using today. And so I have a student that's working on implementing that for the KCoFI system. Finally, because Andrew Tanenbaum did it, I thought I would do it as well: just tell you a little bit about what we have at the University of Rochester. At the University of Rochester, we have, obviously, degrees in computer science; we offer master's degrees and PhD degrees in computer science. We are very strong in computer architecture, operating systems, and compilers. And I am now adding, along with another faculty member, security expertise to our faculty. In addition, you might want to know that we have a new master's program in data science. So if you're interested in big data, and in being able to study big data along with some sort of application area, we now offer a one-year master's program in that. We have a small department with small class sizes; my operating systems course has about 25 students, both undergrads and grads. So there's a lot of personalized attention from our faculty. And if you like doing kernel programming, you get to do that in my operating systems course. So, something to think about if grad school's been on your mind. So in summary, we built a system called Virtual Ghost. Virtual Ghost permits applications to protect themselves from a commodity operating system kernel. It uses compiler techniques, namely control flow integrity and software fault isolation, and this keeps the higher processor privilege levels free: we don't need the processor to have VMM extensions, and we don't have to use them, so they can be used for something else. And it turns out that by using compiler instrumentation, we are faster than current VMM-based approaches. With that, I'll take questions. Thank you. Yes? What are your plans for upstreaming this? So, one of the reasons why I wanted to present here was to gauge how interested people are in this technology. And one of the reasons for that is that, being at a small school, I have a small research group, so I have limited developer bandwidth. You tell me: is this something that's interesting? Does it sound too wild and crazy? What do you think? Well, just because it's wild and crazy doesn't mean it couldn't be. Ha ha ha. Sorry, I didn't mean to put you on the spot; when I said you, I actually meant all of you. I don't know, but I suspect it's the sort of thing that some FreeBSD people, not all, but some, might be interested in; at the very least, making it easy to integrate these kinds of things would give a better platform for research, which some of us are excited about.
But also, the people who might be interested in this sort of thing are quite busy. So again, maybe. OK. A question from the net? Yes? How much work is it to support other architectures? Good question. Basically, what you would need to do is port the SVA-OS runtime library to another architecture; you'd need to rewrite that. If you port the kernel to the virtual instruction set properly, that's probably about all you need to do. Sadly, that is not what I did in my research prototype, because I was in a hurry and kind of learning the low-level parts of FreeBSD as I went. For the virtual instruction set port, I didn't create a new machine port, because that was more work; what I did was take the x86 port, rip out the x86 parts, and put my virtual instruction set parts in. But if you do it right, then you shouldn't have to worry about that stuff. You also have to port the compiler parts, but again, this is only 5,300 lines of code, so it shouldn't be a lot of work. And on adoption: one of the barriers to entry is that to take advantage of this, you have to change the way you write programs, which makes it harder to maintain parity with other platforms, and there's also some overhead. If you could find a way to solve that, it would probably go a long way toward making this work more easily. Right, you're correct. One of the interesting things is that, for mobile applications, I think we can actually use automated compiler transforms to take existing applications and transform them into versions that protect themselves. For regular Unix applications, it's more difficult, because we don't know which files are supposed to be shared, and therefore not encrypted so that everyone can see them, and which ones are supposed to be encrypted. But I think it is possible to create an API that the program just compiles to. Then, on systems that support Virtual Ghost, it does the whole encryption thing, and on systems that don't use Virtual Ghost, the encryption is just a null operation. Yes? I think it's cool stuff. I was wondering if you could talk about SGX and how it's attacking the same kind of problem. It's attacking a similar problem; or rather, SGX is starting to attack the same problems that we're attacking. The first implementation of SGX, SGX1, was primarily designed to take small bits of applications and throw them into what they call an enclave, this isolated environment. And to provide the security guarantees they want to provide, they did simplifying things like: code in an enclave can't execute system calls. So code in an enclave, for example, can't receive a signal, can't have a signal handler, and can't do read and write. Whereas in Virtual Ghost, it can. Now, my understanding is that the group at Intel is starting to develop a new version of SGX, called SGX2, which tries to fix some of these limitations, because they want to do what Virtual Ghost can do, which is to protect an entire application. I'm not familiar with exactly what they've done and how far they are on that, but I know that that's what they're trying to do.
Yeah, and Microsoft is attacking this as well, aren't they? Yeah, so I think you're talking about Haven, from OSDI. One thing about Haven is that I think they're trying to protect the application from the hypervisor. And for the operating system kernel, they've said: okay, we're just going to use a little OS, so the OS is now part of the application. That works, but that's not a commodity operating system kernel, right? That's a library operating system. So... Yes? What do you do with dynamic libraries? Ah, so we have not investigated dynamic libraries yet. I think it would be simple enough to use cryptographic signatures, so that when you ask for a dynamic library to be loaded, you can actually verify that it is the correct library to be loaded, but it's not something that I've actually thought about in detail yet. Yes, ptrace does not work. Any other questions? Is this of any value for my cloud server? Yes. So, in a cloud server, or really in any server system in which you care about security, you want to isolate applications from each other, and you want to isolate them from the system software, because system software can be buggy, and in particular, the way that we build commodity operating systems, they're very large, they're very complicated, and they're very privileged, right? But in addition to that, on a cloud computing system, you'd want to protect against the hypervisor and against other virtual machines, because you don't want other people running on the same physical piece of hardware to be able to access your data or corrupt your data. So yes, it's very relevant to that area. One interesting piece of future work would be to extend the virtual instruction set to support a hypervisor, and that way protect applications not only from the operating system kernel, but also from the hypervisor and from other virtual machines that could be running on the same hardware. So, when you execute a program, what prevents the OS from starting something other than the program you asked for? In other words, when you execute a program, how do you know it's the right program? Well, okay. First off, the question is: can the operating system kernel corrupt the application code? And the answer is no, because of what I showed earlier, where the application code is encrypted with the Virtual Ghost public key, so only Virtual Ghost can decrypt it, and so forth. Now, the second question is: let's say that you want to execute program foo, and the operating system kernel says, I like program bar better, so it executes program bar. Now you want to talk to foo over a pipe; how do you know it's foo and not bar? Well, that's why I said the application key is a public-private key pair. Because if you know the application's public key, or let's say you have a digitally signed certificate saying this is the public key of this application, then if the operating system kernel starts up another application and you start talking to it, you can actually authenticate whether the application you're talking to is the one that you want.
So, if you know foo's public key, then if you're talking to bar, bar won't be able to authenticate itself as foo. So basically, if I run something like ssh-keygen, and the code or the kernel is compromised, there has to be some way to check that the keys are right? Right. So in the case of ssh-keygen, what's going to happen is: let's say the operating system kernel runs another application. That application is going to have a different public-private key pair, right? Because the operating system kernel doesn't know what your ssh-keygen public-private key pair is. So if you then run your SSH client, it's not going to be able to use the authentication keys created by the operating system's ssh-keygen, because they're not going to decrypt properly. Right. All right. Now, there are issues with: if I'm the user and I'm actually typing on my keyboard, how do I know that the application I'm talking to is the right application? How does the application know it's talking to me, the user, as opposed to some other program that the operating system set up to masquerade as the user? That's an open research question, which I have a few ideas for, but that's definitely future work. Yes? When the kernel boots, is Virtual Ghost loaded by a component that's within that kernel, or is it running on the outside? Is there any point at which you can verify that the kernel is actually running on Virtual Ghost? So, in a complete implementation, including one that can automatically load new kernel code, what you would have is your bootloader: you'd use trusted boot to make sure you have a trusted bootloader, which loads Virtual Ghost, which loads the operating system kernel, which then can start up applications. Yeah, exactly. But it could be the case that if the hardware itself were compromised, it could just start up an operating system that behaves like one running Virtual Ghost, but is in fact just plain FreeBSD that happens to know the keys. If you corrupt the processor, the actual hardware, then no, we don't defend against that. That's not in our attack model. So yeah, you're definitely cooked then. How would this model protect against things like keyloggers? Keyloggers? Ah, okay. So I think Virtual Ghost provides the infrastructure to protect against something like keyloggers, but we don't actually have that implemented, because then what you're talking about is: okay, how do you use these features? And that becomes a whole lot more research, right? So one way to potentially do it is to have kernel drivers that are trusted. You create kernel drivers that can have ghost memory, and if you can do that, then an application, when it starts talking to a driver, can actually authenticate it. So basically, you have a few drivers that Virtual Ghost gives keys, saying: okay, this is the driver for the keyboard, this is the driver for the screen. Applications can get those public keys from Virtual Ghost, authenticate the drivers using them, and then talk to them over an encrypted channel. That's one potential way you could do it. That sounds almost like a microkernel. Um...
Yes, but not to the same extreme, because you don't have your file systems and your networking subsystem built as isolated drivers. And more importantly, what I would like to do, and this is something that I've been thinking about, is to keep these drivers in kernel space, because I don't want to have to run them in user space. And that's an additional challenge: can we actually do that? I think the answer is yes. There are hardware mechanisms that attack this at a similar point; for example, ARM TrustZone attaches a protection mechanism in hardware. So the question is, if there is hardware support like that, could you use it as well? The whole premise of this is that the program is only accepting trusted inputs, and Virtual Ghost makes the program trust that its data is not being modified and is encrypted, but that breaks down for some of the trusted inputs. So the thing is, in order to build a complete system, an end-to-end system, you have to deal with issues such as: how do you know you're actually talking to the user through all the pipes? When you rely upon another piece of software, how do you verify that it's doing what you want it to do? And I think that there are solutions to those. Proof-carrying code might be useful, or just using public-private key encryption, where applications actually authenticate each other, as opposed to just trusting each other like they do today. But yes, it's an open problem. It's not clear exactly how to solve these things or what the best way is. Yes? To your question earlier: there is always considerable interest in the FreeBSD community on security issues, and this is certainly one of them. Have you been able to compare this approach with the other security measures that are already in the FreeBSD family, the jails, Capsicum? They are addressing completely different issues. What Capsicum is doing, and what jails are doing, containers and all that stuff, is isolating user-space applications, and they're trusting the operating system kernel to implement this correctly. And it's a very valuable thing to do, I should add. I used to build mandatory access controls to do this, probably about 15 or 17 years ago. Okay, that's a long time ago. Anyway, that's a very good thing to do. However, it's trusting the operating system kernel, and what I found when I worked for Argus Systems Group, which made extensions to Solaris and AIX with these mandatory access controls, is that when attackers couldn't attack applications, they attacked the operating system kernel. And the operating system kernel violates all the security design principles that we know we're supposed to follow: it's large, it's monolithic, it's overly privileged. So what my research work has focused on is trying to address this challenge: the commodity operating system kernels are not designed properly, they're susceptible to buffer overflows, they're susceptible to rootkits, and so forth; what can we do about it? So basically, these are orthogonal problems, is what I would say. So I think that's it.
If you have any other questions, please feel free to come by and chat with me. Thank you.
In this talk, I will present our work on using the Secure Virtual Architecture (SVA) to protect FreeBSD applications and the FreeBSD kernel from security attacks. SVA is an LLVM-based infrastructure that permits us to use compiler instrumentation techniques to enforce security policies on both application and kernel code. In this talk, I will briefly describe how we used SVA to implement KCoFI: a system that enforces control-flow integrity and code segment integrity on the FreeBSD kernel to protect it from control-flow hijack attacks. I will then describe how we extended KCoFI to build Virtual Ghost. Virtual Ghost protects applications from a compromised operating system kernel. I will describe how Virtual Ghost uses compiler instrumentation to prevent the FreeBSD kernel from spying on and corrupting private application data and how it prevents the kernel from maliciously modifying application control flow (while still supporting features such as signal handlers and process creation).
10.5446/18647 (DOI)
I've enjoyed the conference so far. My name is Peter Hessler. I'm with the OpenBSD project, and in my day life I am also a network administrator and system administrator for a managed server hosting company. So I'll be talking to you about, well, first making sure my screen saver doesn't start, and then about using routing domains and routing tables in a production network. This started several years ago when I was working for a company with Reyk Floeter, another OpenBSD developer, and we needed to solve some problems that customers had. We did the development of this, and I was the one who went to the customers, implemented it with them, did the support role, and wrote a lot of documentation for it. So I'm going to talk to you about some of the lessons I learned and how you can set up your own routing domain network. First off, some definitions. There are two concepts: routing tables, commonly called rtables, and routing domains. These are different but very related things. First, the routing table. In a traditional Unix system, in a traditional router, you have a single routing table that contains all of the network routes you know about and how to reach them; most systems have one and only one available. In OpenBSD, you're allowed to have multiple routing tables, all utilizing the same interfaces. Say your firewall has four Intel gigabit cards, em0 through em3; you're able to send your traffic over any of them as necessary. The IP addresses in the routing tables cannot overlap: you have to assign them and they have to be globally unique, although you can have different paths to reach the same end destination. Multiple routing tables can belong to a single routing domain, and in the next slide I'll go into what a routing domain is. Multiple routing tables are most commonly used for what's known as policy-based routing. The most common example: you have an office with two links to the internet, a DSL link and a cable modem link. The DSL link is very low latency, so each packet is sent very quickly, but it's low bandwidth, so a large download takes a long time. The cable modem is the opposite: very high bandwidth but also very high latency, so each individual packet takes a long time to get across, but you can reach really high data rates. If you're just downloading an update or viewing a web page, you want most of your traffic to go over the cable modem. But for your voice-over-IP phone, each audio packet is very small and needs to be delivered very quickly and very reliably to the other side; otherwise you get the weird delays, and possibly echoes, and people talking over each other. So you use your main routing table to send most of the data over the cable modem, and you simply mark the voice-over-IP traffic to go over the DSL link, where it's much faster. Now, a routing domain. This is a completely independent routing table instance inside the kernel. It allows you to have, as in my later examples, the 10.0.0.0 network assigned multiple times, with completely independent networks available. An interface, however, can only be assigned to one routing domain at a time, because when a packet comes in, how else would you know where to route it, how to handle it, and which routing domain it is for?
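Going back to the two-uplink example for a moment, here is a minimal pf.conf sketch of that kind of policy-based routing; the interface name and port ranges are my invention, and it assumes routing table 1 carries a default route via the DSL link while table 0 points at the cable modem:

    pass out on em0 inet proto udp to any port 5060:5061 rtable 1   # SIP signalling via the DSL link
    pass out on em0 inet proto udp to any port 10000:20000 rtable 1 # RTP audio via the DSL link
    pass out on em0 inet                                            # everything else uses the cable default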
A routing domain always contains at least one routing table. Most people either do policy-based routing within a single routing domain, or use multiple routing domains; it's not common to mix the two in a production environment. A bit of history: the first edition of routing domains was added in OpenBSD 4.6, in October 2009. Originally it was IPv4 only; IPv6 support was finally added in 2014, and the main reason it took that long was my fault, because I slacked off on doing the work and let the patch rot for about a year. And then a few more definitions. VRF-lite and VRF are what these things are commonly known as in the networking world; they were originally Cisco terms that Juniper and the other large networking vendors started using as well. VRF-lite is simply multiple routing domains. This is generally configured by hand on a single system and is designed for a smaller entity that has one or two routers needing to handle a lot of different customer interconnects. VRF is also known as MPLS. This gets a lot larger: it requires interaction between BGP, LDPD, and usually larger networks. If you were in Ray's talk earlier about OpenBSD and virtualized networks, you would have seen talk about overlay networks and underlay networks; MPLS is often used as an overlay network on top of someone else's underlying network. A common example: if you are a large regional or even national ISP, you have routers at different points of presence within a country, your customers connect to them, and their traffic is routed on top of possibly someone else's network. That way it stays within your control, but you don't have to own all the physical links between the Atlantic and the Pacific oceans. Now, setting up a routing domain. A rule in networking is that you must have a route to an end destination; if you don't have a route, the packet gets lost and usually dropped. A small organization will have a default route pointing at its main gateway out to the internet. A medium-size enterprise may take a full BGP feed, and that's effectively a default route. But when you're doing routing domains, a very common mistake is forgetting to create a default route within that routing domain. In OpenBSD, when a packet arrives from the network, we do a check: do we have any sort of valid route this packet could be sent to? We do that check extremely early on, even before PF inspects the packet, and if we don't have a route, we drop the packet, for performance reasons. Now, a very common use of routing domains is this: the packet comes in, you use PF to steal the packet from that routing domain, and you spit it out onto another routing domain. Without any route present, that will fail. So what you really want to do is set up a default route in the routing domain as soon as you create it. In my experience, about 60% or more of all problems seen in production networks were simply forgetting to create a default route, or forgetting to create a valid route for the destination system. So simply set up a default route and you will avoid a lot of problems.
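In command form, that advice amounts to something like this sketch (addresses are invented):

    ifconfig em1 rdomain 5                          # this wipes any addresses already on em1
    ifconfig em1 inet 10.0.5.1/24
    route -T 5 add default 10.0.5.254               # a real gateway, if one exists
    # for a closed network, a blackhole default satisfies the check too:
    # route -T 5 add -blackhole default 127.0.0.1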
Yes? Are you suggesting to set up a default route in a routing domain even when it's a closed network, with no real egress? Yeah, so the question is: on the real network side you have a full BGP feed and there's no real default; in the routing domain, should I create a default route anyway? The answer is: while you don't strictly need to, all you really have to do is make sure a route exists for every destination network. You can do just that, but it's extremely common to forget to update it, and extremely common to not pay close attention, especially if you're doing this by hand in a VRF-lite situation where you're not distributing routes dynamically; keeping those routes current is what dynamic routing is for, and VRF-lite is, by definition, without dynamic routing in the routing domain. Now, in my examples I'm using a default blackhole route, which is perfectly legal, and which actually allows the packets to come in, be processed by PF, and then move to wherever they're supposed to go. I'll get into a more detailed example later of a situation where I have a very closed-off link for a customer network: a default route for them there, and from there into a different routing domain; from their side it's basically a single slash 24. Yes? So what you just said is: having no default route at all, versus having a reject default route, those end up doing something different? Yes, there is a difference. The check is: does any type of route exist for this packet? So it is just some weird wart of the implementation? It is a quirk of the implementation, yes. It is an optimization for performance that was done a while ago. It may be worthwhile for us to revisit that decision; however, in the current shipping code, it is what it is. So even a default route just pointing to localhost, I'm assuming? Correct. Anything such that, if you do a route get in OpenBSD with a destination address, it returns a route entry; if route get shows you anything at all, then that's accepted. The real decision of which next hop should be selected is made later. I thought you were telling me we had to have a default route pointing to an actual gateway. No; in the examples I use later, the default route is a blackhole route to localhost. It just has to be a route entry? Exactly. There has to be a route entry that is valid for this destination. It can be completely bogus, and it usually is. What you can also do is create routes only for the destination networks they're allowed to talk to; as long as something valid exists for the destination, that's all that matters. So again, in my experience, when you're creating all of this by hand and managing it without any sort of dynamic routing, it gets very, very confusing: which routing domain am I in? where are the routes pointing? So it's much simpler to just add a default, whether it's a regular default that makes sense for your network, or something bogus where you blackhole everything; either option is perfectly legitimate. Maybe just a comment: I've been using OpenBSD for a while, more on the PF side, for instance with the reverse-path validation, and that is integrated really early in the network stack. Right, and this check happens even before that part; it is done extremely early on in the whole packet flow within the kernel, way before the packet even touches PF.
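You can test that early-drop condition yourself; a quick sketch, with 192.0.2.1 standing in for any destination you care about:

    route -T 4 get 192.0.2.1
    # if this prints a route entry, even a blackhole or reject one, packets for
    # that destination survive the early check and PF gets a chance to steal them;
    # if it reports no route, they are dropped before PF ever sees them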
But I will get to more examples and show that a little bit more later. So yes, some of this can get very confusing, and it's which routing domain the packet is in at any given time that determines how it's being routed; it is important to keep that in mind. A lot of people are not used to a system with more than one routing table installed and available, and for users who are not familiar with it, this is a very easy thing to forget: they'll look at the standard routing table and go, but I have a route, why isn't my packet going out this way? It's in a different routing domain. You need to look at the routing domain the packet came in on, and make sure that the tools you use to check and verify are looking at the correct routing domain. A very common situation is that you have completely independent networks going through the same router: traffic comes in on an rdomain and goes out on a different interface in the same rdomain. But what if you want to move it to a completely different routing domain, for whatever reason? In that case you use PF, and I'll show an example of that a bit later. Right now, we're just going to set up a very basic example. We take the interface em0 and declare that it's going to be part of rdomain 1. By default, in OpenBSD, every single interface is in rdomain 0. The reason we set this first is that, when you change an interface's routing domain, what should happen? Is the configured IP address still valid for the system? Is the configuration still what it should be? So in OpenBSD, when you set the routing domain, it erases all the existing configuration on the interface and removes all IP addresses from it. Always set the rdomain first, and then set the IP address. It is generally recommended to also create a localhost IP within that same routing domain. In this instance, you can see that I have set up a default route to a gateway system, 10.0.0.1. And then here I am executing the sshd daemon, and it's being started in routing domain 1, which is specified here with dash capital T. That allows you to start any arbitrary application within a specific routing domain: it can receive connections from that domain, and all of its outbound traffic is sent over that routing domain. This can be used to set up, for example, a management network that is not accessible from the regular part of your network. And this is the output that you see from it. You can see right here it declares that we're in rdomain 1; everything else looks the same. Again, here, rdomain 1; everything else looks as you would normally see. And then here we take a look at the netstat output to see the routing table: again you pass minus capital T 1 to declare which routing domain you want to look at, and then this is the standard output, which you can easily read and understand as an administrator, the way you normally would.
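A reconstruction of the setup on those slides, as a sketch; the addresses are invented, since they are not readable from the transcript:

    ifconfig em0 rdomain 1            # set the rdomain first: this wipes em0's addresses
    ifconfig em0 inet 10.0.0.10/24
    ifconfig lo1 create rdomain 1     # a localhost for the new routing domain
    ifconfig lo1 inet 127.0.0.1/8
    route -T 1 add default 10.0.0.1
    route -T 1 exec /usr/sbin/sshd    # start sshd entirely inside rdomain 1
    netstat -rn -T 1                  # inspect rdomain 1's routing table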
And then this is an example of some PF rules that you can use. The first rule says that any traffic coming in destined for this IP address should be moved to routing table number 2, which in this case would generally be part of routing domain number 2. This is how you move traffic from one routing domain to another. In this case it's not doing any address rewriting, so the destination and source IPs need to be unique on both sides; otherwise the systems get a little bit confused. Not the OpenBSD side itself, because it understands which destination it is, but once the traffic leaves OpenBSD it goes onto the regular network, and the network itself has no knowledge of which routing domain this is. So the question is, does PF create the appropriate stateful rules for the return traffic? Yes, PF creates all the correct state entries, so you don't need to do any crazy tricks for the return traffic. Here, you're able to do an anchor, and you can say that everything within this anchor applies to any packet involving routing domain number 15; then you just write your standard rule set inside it. You don't need to worry about which interface the packet was received on or anything like that; the whole block only applies to that routing domain. Here is a slightly more complex example: pass in, for traffic received in this routing domain, do a redirect to the localhost address, generally the loopback, and send it to routing table 4; that just steals the traffic and moves it over. And the same thing with the last rule, doing an outbound NAT. So, as I mentioned, we ran this in production, and as we did, we saw a lot of interesting things. The first one is route exec. Originally, as this was designed, it was simply an internal tool to help us work on the development, so that we could later add support to a lot of the utilities. We discovered this was an amazingly useful tool on its own that should be made a generic option. There was a short period of time where we made a push to add very specific routing domain support into all of the tools that had any sort of access to the network, for example adding rdomain support natively within SSH and various other tools. We later realized it was much better to simply provide it through this route exec command and use that as the tool going forward, instead of trying to add it to every single daemon or utility. So we decided that only the specific network tools that have to know about routing domains, basically anything that sets or checks a route, get native support; for everything else, you really should be using route exec, if you can. And yet, for OSPFD, the rdomain is a global setting. That is a different thing: it was much easier to deal with in the route decision engine, and that is something we definitely need to expand on. OK, so in other words, don't use OpenBSD for cross-routing-domain OSPF use cases yet, or for multiple-rdomain use cases. So in that case, what I've actually done is usually run multiple OSPFD instances, each one inside its own routing domain. Another thing is that OSPFD looks at all the interfaces it is handling, so you simply keep it from doing anything across domains; OSPFD should not do any cross-routing-domain work. BGP can cross-pollinate? Yes. BGP is also used for full MPLS, so it definitely has to know about this. Right.
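As a sketch of that one-ospfd-per-routing-domain workaround: the config file paths, the control socket paths, and my use of the -f and -s flags here are assumptions, not taken from the slides:

    route -T 204 exec ospfd -f /etc/ospfd.rd204.conf -s /var/run/ospfd.rd204.sock
    route -T 207 exec ospfd -f /etc/ospfd.rd207.conf -s /var/run/ospfd.rd207.sock
    # one instance per customer rdomain, each with its own config file and
    # control socket, matching the naming problem discussed below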
So, as I mentioned earlier, when you add a routing domain to an interface, it erases the IP address configuration on the interface. We treated that as a way to keep people from leaking information out of their network, because on a routing domain network, just because you have 10.0.0.1 somewhere doesn't mean it has the same meaning within all of your routing domains. It may not. So the first thing it does is erase the IP address configuration. However, the routing domain setting is independent for the physical interface and for any virtual interface sitting on top of it. So you can have the physical interface, em0, in one routing domain; a trunk sitting on top of it in a different routing domain; VLANs sitting on top of the trunk in a completely different routing domain; and multiple VLANs each in their own routing domain. There are no issues at all with this: each of these is a real, full-featured, first-class interface, so there's no problem mixing them. So you have your 10 gigabit link into the switch, you have all your VLANs coming in, each VLAN is simply marked into a different routing domain, and then you process it as normal. Carp is a little bit of a special case, because carp is half of an interface by design, so carp needs to be in the same routing domain as its parent. But that's the only restriction, because of how carp behaves on the network. Is there any difference in behavior when the trunk members are in a different rdomain than the lag interface? Ooh, I have not tried that. I'm not sure. I believe there should be no difference in behavior, but I have not specifically tried it, and that sounds interesting; I think I'll try that when I get back to the office.
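In configuration form, the stacking just described might look like this hedged sketch; the interface and VLAN numbers are invented:

    ifconfig trunk0 create trunkproto lacp trunkport em0 rdomain 3
    ifconfig vlan204 create vlan 204 vlandev trunk0 rdomain 204
    ifconfig vlan207 create vlan 207 vlandev trunk0 rdomain 207
    # the physical em0 stays in rdomain 0; each layer picks its own rdomain.
    # carp is the one exception: it must share its parent's routing domain.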
Yes? If I wish to enforce that certain daemons are always started in a certain routing domain, it seems to me like I would want to force a login class into a given routing domain in login.conf, or something like that, instead of using route exec every time; or maybe in my rc.d scripts. What is the canonical way to manage where things are started? So the question is: what's the proper, canonical way to define which routing domain a daemon is started in, at boot, or when you're just running it as a program? The answer is: put the route exec command, with the daemon you want, inside rc.local. There is no support within login.conf to enforce this for a specific login class or for a specific user. And unfortunately, there are a lot of very ugly problems to solve before this can go into the rc.d subsystem, and they have not been solved yet. As an example, for something like OSPFD: is there a standardized naming scheme for the configuration files? Is there a standardized naming scheme for the control socket that you would use ospfctl to talk to? There is no well-defined mechanism for this in the rc.d subsystem, so you would need to specify it on the command line or in the configuration files. I'm thinking, in the last year or two, with the rc.d framework, you could just create your own custom rc script, and then you retain the ability to start this. But that only works if you want to start it once. OK, sorry, there are two parts to this answer. If you only want to start the daemon once and simply want to move it into a different routing domain: you are not able to specify a prefix command in the rc.d subsystem, so you cannot prefix the start command with route exec. If the tool does have native support, in a configuration file or as an option, then yes, you can do that through the rc.d subsystem. What I mean is, to say it the Linux way: copy /etc/rc.d/ospfd to, say, ospfd_rdomain2, and edit that script. Yes, you can simply copy it over; that would work. It's ugly, and I'm not sure I would call it part of the framework, not strictly, since you would have to do more than simply edit rc.conf.local. Anything done within rc.conf.local, or with the rcctl commands, is definitely within the framework; I suppose at this point it's a question of how you define things. But yes, you absolutely could copy the rc.d script and edit it so it starts up the way you want. Generally, what I have done is to put it into rc.local and specify it as necessary. Is there any support for enforcing that something in a particular routing domain can never be reached from another routing domain? The motivation being that I could use this as a mechanism for compartmentalization of things on the network. So the question is: is there a guarantee that traffic from one routing domain will not move to another routing domain? Yes, and also, on the user-space side, that a process is forced to stay within its routing domain. Yes, that is a strong guarantee that is provided. The routing domain is stored both in the process and in the routing table it's using. A process in routing domain zero can move to a different routing domain; however, a process in any other routing domain cannot move outside of it without root privileges. If it's running as root, then yes, it can move away; but if you're worried about the quality of software and processes trying to escape this sort of thing, then you shouldn't be running them as root. So that's a very clear answer. And within the routing domain, traffic cannot escape to another routing domain. You are allowed to tag the traffic and move it over with PF, but that is an administrative decision that you've made and loaded into the rule set, and you can only load the rule set as root; again, if the attacker is root, game over. So yes, I have used this in the past, especially for a management domain: you hop in with SSH from the outside, into a machine, and from there you can SSH out to the management network, so it's much, much more difficult for traffic to accidentally cross that boundary. Exactly. All of that stuff is hardwired, but I wanted the opposite thing: it's useful to be able to run a single server that is just bound to a listening port and have it accept connections from any routing domain, as long as the replies for each accepted connection end up going back via the right table. This is useful when you have, say, two carriers, and you just want the return packets to go back the way they came, without changing any code at all in the server. Right, that is precisely what this PF configuration does. But you have to use PF for that? Yes, that is correct. You have to use PF for the classification part that moves traffic across routing domains; the isolation itself is guaranteed and hardwired within the system, and you are unable to escape it without using something outside of the system.
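A hedged sketch of that single-listener pattern; the address, protocol, and port are placeholders of my own:

    pass in proto tcp to 10.4.0.4 port 80 rtable 2
    # no "on rdomain" restriction, so this matches packets arriving in any
    # routing domain; the PF state entry remembers where each connection
    # came in, so replies return via the original routing domain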
So if you send the traffic out and it goes through a switch and comes back in, then it's in whatever routing domain it was received on; or you can utilize PF to move it. So you can use PF to do the inbound routing. Does the inbound routing all have to end up in one routing domain? No, inbound traffic is put into whatever routing domain the interface is in; once you've received it, then you can move it. So in this example, there's no in or out direction. So I can have my port 80 web server just listen on port 80, and it gets connections from all routing domains? Yes, that is correct. With this rule, you would just add proto tcp and port 80, and that's exactly what it would do: it would send all the traffic there. I'm missing something: the inbound traffic, you're using that rule to shift all of it into one routing domain? Yes, that is it. In this rule, we pass traffic in any direction, in or out, from any IP address to the IP address we specify, 10.4.0.4, and when we receive it, we move it to routing table number 2; in this case, routing table number 2 is defined within rdomain number 2. So this moves all matching traffic, received from everywhere, into a single routing domain, because "on rdomain any" is the implicit part that is not printed there. That's what the top rule is doing. You can use this for a web server or a monitoring system or a backup system or anything else that you want to be widely accessible to all of your systems. And of course, if you want to receive traffic from multiple routing domains and move it all to the same one, then you need some way to guarantee that a route for this destination address exists in all of those routing domains, so that the traffic is actually delivered to you from each of them. For example, in this case, none of the routers outside of you are allowed to use 10.4.0.4, because otherwise traffic would be delivered to that other system; but if that IP address is used by someone behind one of those routers, I'll show you an example a little later of how you can still receive that traffic by having them send it to a different destination IP than what everyone else is using. The other thing is that when we first added this support, we only had support for the new routing domain: we received traffic, and we moved it to the new routing domain. We ran into a little bit of a problem with the ftp-proxy command, because ftp-proxy needs to set up rules going in both directions, so it has to know both the old and the new routing domains. When we discovered this, the traffic was coming in on a non-default routing domain and was also being sent out on a non-default routing domain, so we had to add support for that in a later step. As I mentioned before, the standard rule for running a service in multiple routing domains is: either do the inbound trick, or simply run the service again.
Now, if you just run ntpd again, you're going to have very interesting problems. I started up five ntpds in different routing domains on my laptop, and after about five minutes of wall clock, my laptop was in August. Thirty minutes later, I could have retired, if the year on my laptop had meant anything; it went totally crazy. So you really don't want to do that. So we actually added very specific support within ntpd for routing domains. You can specify which routing domain it listens in, and you can set that as many times as you need to, and you can specify the servers it pulls time from in arbitrary routing domains as well. Those are per-line options in the configuration file: you can have listen on star, and server A in routing domain 1, server B in routing domain 2, server C in routing domain 15. That way you can pull time in without strange network tricks, which unfortunately don't work in PF here, because the destination IP address of server 2 might also exist as a client address in routing domain 35, so you could not simply classify the traffic and move it around. Then, after the first release of this, we discovered we needed the ability to say: on this routing domain, we don't care what interface the packet was on; we just want to match everything within this routing domain. In the original release, that syntax was not possible, and it was really what we wanted: in order to express the same thing, instead of three simple rules, we had to create about four pages of rules on the machine, and it was very error-prone; a single typo, pressing 4 instead of 5 on one line, and suddenly traffic is being leaked everywhere. So we added that support later. Now I'll get into an example of the sort of network you can create with VRF-lite, with a pure routing domain setup. This was a very common scenario that I saw in a lot of organizations: you have a management network, you have two outbound routing domains, you have a backup server, you have monitoring. This is the design, a fairly simple one that I modeled on several of the ones I saw. You see up at the top, the connection to the internet uses routing domain 20, not the default routing domain. We have customer orange in 208, customer pink in 204, customer blue in 207, and the monitoring server and backup server in different routing domains as well, because we want to enforce that no one can get to them without being extremely explicit inside the network. This sort of design was requested by some of our customers; we were not able to convince them that it was slightly overly complex. The individual customer networks you definitely want in independent routing domains: this would be a link from your central colocation hosting provider into their direct network. And, as you all know, everyone simply runs 192.168.0.1 as their primary network, and you want to make sure that traffic does not get leaked from orange to blue; that would be a very bad configuration for you. I also made it a little bit overly complex on purpose, to show you some of the rules you can create for it.
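Going back to ntpd for a second, those per-line options would look roughly like this in ntpd.conf; the server addresses, and my recollection that the keyword is rtable, are assumptions:

    listen on *
    server 10.1.0.5 rtable 1
    server 10.2.0.5 rtable 2
    server 10.15.0.5 rtable 15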
This is for customer pink; these are the configuration values that you would simply use at startup. You see here we define a VLAN interface; it sits on top of trunk number 4. It doesn't matter at all how the trunk is set up or what the trunk's parent is, as long as they're valid in some way, shape, or form. We define the VLAN, then we define the routing domain, and we give it a group name, simply because it's easier for us: a label we can see in the output of ifconfig and also use within PF. Then the IP address assigned to the machine; because this is dot one, it's very likely to be the default gateway, or a gateway of some type, for the customer. And then we also created the localhost, the 127 address. While this is not strictly a requirement, I find it much easier to think about machines if I can somehow encode the routing domain within the interface name; it's just a little cheat sheet that I use, and that's why I called it localhost204. You see here that I create the standard localhost reject route, and then I create a default route that's a blackhole. The customer only has the single slash 24 behind it, and there's no reason to send any other traffic over there: any other default route I could create would either be a pathological case or would just send traffic to some machine back across the link, and any network packet that I'm not actually stealing and sending to another routing domain would just ping-pong back and forth, which is obviously a bad thing. This is the pf.conf I have for the customer. You see: for all the packets received in rdomain 204 we have a default block policy. We pass in all the traffic coming in from the pink customer; we just want to accept it and not do too much filtering on it. We pass ICMP in both directions; we all like ICMP, ping is useful and traceroute is very nice. We pass from the monitoring network; it's a special network that has hooks into all the different routing domains natively, so we just pass it into the pink network. I used a little bit of shorthand here: p colon net is the same as pink colon network; I just did that so it fits on the slide. You see here: pass proto tcp to the backup server, port 873, that's the rsync port for those of you who don't have all the port numbers memorized, sent to routing table number 6, so the traffic goes there directly. And we do an outbound NAT rule towards the external IP address we have defined for them. So you see: traffic comes in from customer pink, is received on our firewall here in this box, and is sent out to the internet in routing domain number 20. And then down here, because that traffic is not received in rdomain 204, we have to handle it outside of this anchor block: in this case it's just ICMP, standard ping tests from the monitoring system in whichever routing domain, redirected into routing domain 204. This is the output of the routing table; as you can see, it looks fairly standard, like what people normally use. And this is the blackhole route. Because we have the default route, any traffic that arrives on this interface passes the first check and goes on to PF; as we see here, matching traffic is moved out to routing table number 20 by the bottom rule, and if it doesn't match, we can't steal it, and if PF does not steal the traffic, it's simply dropped on the floor, as a standard blackhole. And then the same thing again for customer orange.
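Pulling those pieces together, a hedged reconstruction of the customer-pink configuration; the backup and NAT addresses, and the anchor name, are invented where the talk did not state them:

    # /etc/hostname.vlan204
    vlan 204 vlandev trunk4 rdomain 204 group pink
    inet 192.168.0.1 255.255.255.0
    !route -T 204 add -reject 127/8 127.0.0.1
    !route -T 204 add -blackhole default 127.0.0.1
    # /etc/hostname.lo204
    rdomain 204
    inet 127.0.0.1 255.0.0.0
    # /etc/pf.conf excerpt
    anchor "pink" on rdomain 204 {
        block
        pass in from pink:network
        pass proto icmp
        pass in proto tcp to 10.6.0.10 port 873 rtable 6    # rsync to the backup server
        match out to any nat-to 203.0.113.204 rtable 20     # out via the internet rdomain
    }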
You notice that I used the same IP address for both orange and pink. That is to illustrate that the incoming traffic is completely independent: traffic from the pink customer, when it comes in and hits the firewall, finds that the firewall has no routes to the orange network within that routing domain, so it doesn't know how to get there, and it's not possible for the traffic to escape and move over. Although, because of this, you should keep in mind not to add routing in the switches between the customer and the firewall: if the switch picks the packet up and routes it, then it's no longer inside the routing domain. And again, we see here just the standard netstat output. Then customer orange: same thing, same thing, all very similar to each other. So, as I was discussing: use the anchors. It's a very nice way to segment the rules from each other. Anchors within PF allow you to either load your own rules into the anchor from a program, via the PF socket, or, as I described, simply add your own rules there directly. And the anchor criteria work as an AND statement: you can say everything in this routing domain, or everything from this network, or whatever arbitrary match you want; you don't even have to give criteria, you can just make an anchor. And as I mentioned earlier, with those three lines from my first slide, using the anchor and the on rdomain feature, we were able to get down to three lines instead of about 60 or 70, with fairly intense commenting and descriptions of what was happening within the network. We also need to keep in mind how crossing routing domains works, because the routing domains themselves only exist within the single firewall system. In a VRF-lite situation, they are only on that one machine and do not exist on any other machine; nothing else has any knowledge of them outside the internal kernel structures. Yes? Does the MPLS implementation use routing domains internally to guarantee segregation? Yes, absolutely; that is a core feature of MPLS, and of our MPLS implementation as well. And then here I'm showing the diagram again, after all those slides of output. So: traffic from customer orange comes in, is in its routing domain, and has to stay within that routing domain; it does not get moved for you. Right, and that was everything I just described earlier. Now, this is a special thing just for monitoring. We can see here that the monitoring servers are in routing domain number 1, so any traffic received in routing domain number 1 is traffic received from the monitoring network. We do some examples here, and this shows you how to use different destination IP addresses, because as we saw, all three of the customers use the same address range. And, well, I can see some people looking a little bit concerned about this, but unfortunately, as a reality, you don't always have full control over the entire path. What this allows us to do is declare that we take this 198.19.204.0/24 subnet and use it as the destination for all the IP addresses that exist within routing table 204; I don't quite remember which one that is, that would be pink. So all of pink is reachable under a different set of IP addresses.
And we use a cool trick here with bitmask, so we can write one rule that covers the entire network range; because it's a slash 24, the last octet is simply copied from one address to the other, so you have a one-to-one mapping, which is much easier to think about and to write rules for. Yes? So if I had two routing domains and two reciprocal rules like this, could I achieve full one-to-one NAT, 10 slash 8 NAT'd to 10 slash 8? Correct, yes. This rule can be used to do exactly a one-to-one mapping between any arbitrary size of netmask, provided you actually have that address space. So if you have two slash 8s and you want to NAT them one-to-one, and you don't want to give them back to IANA or ARIN, which you probably should do, at least one of them... So this is an example of how you can deal with that sort of complexity: how do you get across when you have conflicting addressing. And again, we want the monitoring system to reach the backup system, and we're actually very happy for monitoring to do anything it wants on the backup system, because if someone gets into monitoring, there are going to be a lot of problems in your network anyway. Yes? So if I have two customers, and they both use 192.168.0.1 as the server I'm trying to monitor, I just declare fake addresses and use PF to resolve the fakeness? Yes, exactly; that's exactly what this is demonstrating. And you can use any arbitrary IP addresses you want; I recommend something you control on your network, but you can pick anything, and it just does a redirect or a NAT or whichever method makes sense for them. Yes? So, first of all: say I have a routing domain here and a routing domain there, and I would like some user-space program to sit between them, where the default exit of this routing domain is something like a divert to some program, which then goes out into the other routing domain; a relay, perhaps some firewall-type thing in user space. How would I accomplish that? You can do that in a couple of different ways. One way is to force the traffic out one network interface, through an independent device, and back in on another network interface. Another method is a redirect rule to a localhost socket, with the program receiving it on the localhost port there; the program then handles the outbound part, and you take that traffic and move it out. You can use the tag or the user criteria within PF, you can use the port, the source, or whatever information is there; there are several options available. If you are writing the program yourself, you can actually define this within the program: you would need write access to the PF socket, which requires root, so in a privilege-separated daemon you can have a root process that only sets up the rules you need. ftp-proxy is a program doing exactly that, so you can simply take the ftp-proxy code and add your own content scanner or whatever it is you need inside it.
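To make the bitmask trick concrete, a sketch of the monitoring rule; the fake prefix is the one from the talk, reconstructed as a /24:

    pass in on rdomain 1 to 198.19.204.0/24 rdr-to 192.168.0.0/24 bitmask rtable 204
    # bitmask keeps the host bits, so 198.19.204.37 is delivered to
    # 192.168.0.37 inside customer pink's routing domain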
Yes, Reyk? You know about my patch for vether. I know about your patch; it somehow never got in. For political reasons, I don't know. I had a patch where you can basically connect two vether interfaces in different routing domains, like a crossover cable, internally. For me it was neat that I could connect routing domains at layer two, so I could run dhcpd in one routing domain and dhclient in the other, for example, and it would just work; you can run trunk over it. And people said, well, you can use bridge, but actually bridge doesn't work for this. So now it's stalled, and it's not happening. What do you think: from your point of view, would it be neat to have something like this? So, I remember that patch, and I liked the concept of it. For those of you who couldn't hear the audio: Reyk was describing a patch, never committed, that allowed you to do an interface-to-interface connection crossing routing domains in a nice way at layer two, without having to go through layer three and without requiring PF. I think that sort of support is important. Maybe it doesn't need to be its own device; maybe bridge should learn how to do this, I'm not 100% sure. It would need a vswitch for that, and we don't have one in OpenBSD. Yeah, a vswitch would definitely be the best solution for that one, but as Reyk just mentioned, we do not yet have a vswitch in OpenBSD. We don't have a car, and we are longing for a spaceship. No, no: this would be a nice thing to have, it just hasn't been done. If anyone would like to write this, please talk to Reyk; patches are always, always welcome. Am I out of time? Officially, yes, but it's a half-hour break, so I'll talk very quickly about full VRF. Full VRF, also known as MPLS, Multiprotocol Label Switching, requires two pieces. The first one is a label distribution protocol, which we handle in LDPD; it passes along the MPLS labels that are necessary to build up your network. Each hop along the way has its own labels and builds up a small label database. It works conceptually similar to OSPF, for those who are familiar with that protocol, if you just kind of squint; a lot of the code in the implementation was actually copied from OSPFD and then heavily modified to handle the LDP protocol. And in conjunction with that, BGP is utilized to distribute the end-customer networks over the LDP network, with the customer VPNs built on top of that. Unfortunately, I don't have the time to talk about MPLS networks in detail. Claudio Jeker gave a terrific presentation at EuroBSDCon in 2011; I strongly recommend that you read that paper, which goes into all the great, glorious details, gives a fantastic network diagram that he used for testing, and includes all the configurations you need to get running. I don't know if there's video of it; if there is, I recommend it, and if there isn't, I'm sorry. So, best practices for setting this up. Again, as I said: default routes, default routes, default routes. In my experience it was well over 60%, well over 70%; a huge amount of my problems went away as soon as I started adding a default route the moment I set up the first IP address within a routing domain.
Even if it's just a simple reject route, with the real routing added on top later, having a valid route for the destination is the most critical part of this; it will save you a huge amount of time. Pay attention to what's available within pf.conf. PF is really powerful, it has a lot of options, and it can get very complex, but that complexity is what allows you to do all of this, so it's worth knowing what's there. And I recommend that you spend extra time planning out any network involving routing domains or routing tables. It is not as intuitive as a lot of people think it is; it is a different way of thinking about your networking. Those of you who already run networks can probably remember that, when you first started, you spent a lot of extra time trying to understand how the traffic was being sent around. You'll have a slightly shallower learning curve here, but there will still be a bit of a learning curve as you get used to how this all works. So simply plan ahead and draw good diagrams; they will help you a lot later, when you're trying to debug the network and remind yourself just what it was you were trying to do. I need to give some thanks. First off, Henning Brauer from OpenBSD, who wrote the original multiple routing table support; he did it specifically to support the policy-based routing that I mentioned early on. Claudio Jeker actually did the implementation of huge amounts of this, and he was able to translate all the interesting Cisco documentation about it into something that I could understand; he also dealt with a lot of my questions while we were getting this up and running. Reyk Floeter spent a lot of time and effort on making this available for us: he got a lot of the funding taken care of and got this into OpenBSD via the assets of that company. So, are there any more questions? OK, thank you very much.
OpenBSD has supported routing domains (aka VRF-lite) since 4.6, released in 2009. In 2014, OpenBSD 5.5 gained support for IPv6 routing domains. At its most basic, routing domains are simply multiple routing tables in the same kernel. While this seems like a simple feature, there are many gotchas involved in using routing domains in a production network. This talk will give a brief history, as well as some scenarios for why and how you would use routing domains, while describing several of the issues that came up during the initial deployments. Routing domains allow (for example) an airport to radically simplify its physical network configuration, saving costs and configuration overhead. A small demonstration network will be used to illustrate common and uncommon use cases.
10.5446/18643 (DOI)
I would like to welcome you to this session of the APMS conference; I hope you have enjoyed the talks so far. The first talk is by Professor Toshihiro Tsuchiyama of Kyushu University, with the title Soft Particles. Yes, thank you, chairman. My name is Tsuchiyama, from Kyushu University, and these are my co-workers. I would particularly like to thank Professor Murayama of Virginia Tech, who helped me with the TEM observations. The title of my talk is Soft Particles, meaning second-phase particles that are softer than the matrix. I will talk about the plastic deformation and the mechanical, that is strain-induced, dissolution of copper precipitates in steel, because this can improve the deformability of high-strength steel. First, let me explain the term heterostructure. A conventional heterostructure, in my talk, means a microstructure containing hard second-phase particles such as carbides, martensite, and so on. Now imagine hard second-phase particles in a soft ferrite matrix, and strain applied to this material. In that case, dislocations can move through the soft phase, but they cannot pass through the hard particles, and they pile up at the interface. Stress concentration therefore occurs at the interface, due to the discontinuity of the plastic deformation. At the same time, macroscopic stress partitioning occurs between the phases, and as a result the work hardening of the material is enhanced. This leads to an improvement of the uniform elongation, so the enhanced work hardening is a beneficial effect. But the stress concentration promotes the formation of microvoids at the interface, and that is a detrimental effect: ductile fracture is promoted, and the local elongation deteriorates when we use a conventional heterostructure. So let us look at the mechanical properties of steels. This figure shows the relation between tensile strength and elongation, and that between strength and hole expansion. As you know, hole expansion is one of the important deformation modes in forming; it is governed by the local elongation. Now look at the dual-phase steels, which have the typical heterostructure containing hard martensite. The elongation of these materials is higher than that of the other steels, owing to their large uniform elongation, but the hole expansion ratio is very low. This can be attributed to the poor local elongation caused by the stress concentration at the hard second phase. On the other hand, homostructures such as bainitic steels have a relatively low elongation, but the hole expansion ratio is very high. This result indicates that elongation and hole expansion are in a trade-off relation in these steels, just like strength and ductility. But we want to improve both properties at the same time. That is difficult, but I think it is realizable by using fine, soft particles such as copper. As you know, the precipitation of copper produces precipitation strengthening of the steel. This means that the copper particles interact strongly with dislocations, and dislocations accumulate around the particles in the early stage of straining. This is a similar situation to the conventional heterostructure, but only in the early stage of deformation: in the higher strain region, the copper particles themselves deform plastically.
It is expected that the plastic deformation of the copper particles relaxes the stress concentration at the interface. In addition, significant morphological change, and sometimes dissolution of the particles, can occur during deformation, which also relaxes the stress concentration. Through such relaxation, unlike in the conventional heterostructure, micro-void formation should be suppressed, and we can expect both uniform and local elongation to be improved, giving a high-strength steel with excellent ductility. This is my hypothesis for this work. And there is some experimental support for my idea. These are results for copper-bearing steels obtained in the Japanese national nanometal technology project. As you can see, the mechanical properties of the copper-precipitation-strengthened steels, circled in red, show a better balance of strength and elongation than the other steels. In addition, the fracture of the copper-bearing steels is characterized by a large local elongation and reduction of area, in contrast to steels containing a hard second phase. So from this result, I concluded that the dispersion of copper particles is effective not only for strengthening, but also for retaining ductility, and especially local elongation. In order to demonstrate my idea, we set several targets. The final aim of these studies is to establish a design principle for soft-particle dispersion strengthening, but today I will focus on the first target: obtaining direct evidence of the mechanical dissolution of copper particles by plastic deformation. I would also like to say a little about the mechanism of this mechanical dissolution. Let me give the details of the specimens. We mainly used an iron-2% copper ferritic steel. In addition, as a hard-dispersion-strengthened reference, we used a VC steel, which is iron-0.2% carbon-0.9% vanadium. The chemical compositions of these steels were chosen to give the same amount of precipitates; that means that about 1.4 volume percent of epsilon-copper or of VC carbide should precipitate in these materials. The copper steel was subjected to a simple solution treatment and aging; the aging temperature was 873 K. The VC steel was instead heat treated to precipitate the VC carbide in the matrix: the specimen was heated into the austenite region, and the carbide precipitated together with the vanadium during the diffusional ferritic transformation. During aging, the hardness and the structure of the copper particles change: the copper first precipitates as particles with the bcc structure, which then transform to 9R, 3R, and finally the stable fcc structure, epsilon-copper. From the industrial point of view, the peak-aged material is the most important, but in these studies we used over-aged material, because it simplifies the interpretation of the transformation and makes the TEM observation easier. By TEM we can observe the dispersed particles: this is an epsilon-copper particle, and these are the VC carbide particles.
As you can see, they have a similar morphology, size, and distribution. The mean size of the copper particles is 35 nanometers and that of the VC carbide is 37 nanometers, almost the same. These specimens were then cold rolled, and the hardness was measured as a function of the rolling reduction; the result for an IF steel is also shown for comparison. We can see that the hardness of the copper steel and of the VC steel is higher than that of the IF steel, owing to the dispersion strengthening by the copper particles and the VC carbide, and the strengthening effect remains even after heavy rolling. Here, to compare the effect of the dispersion strengthening, the hardness increment is plotted as a function of the rolling reduction. In the IF steel, the hardness increase is due only to work hardening, because there are no dispersed particles. In the VC steel, the increment runs almost parallel to that of the IF steel; this means that the dispersion strengthening by the VC carbide is simply added to the work hardening, and there is little interaction between the work hardening and the dispersion hardening. But the most important point of this figure is the behaviour of the copper steel: its hardness increment gradually approaches that of the IF steel, so the dispersion-strengthening contribution of the copper particles diminishes with increasing rolling reduction. This suggests that the copper particles themselves are deforming, and losing their strengthening ability. So let us look at the internal structure of the copper particles. This is a TEM image of a copper particle in the over-aged specimen, before rolling. A small number of particles with the metastable 9R structure were also found, but almost all particles had the fcc structure: epsilon-copper particles like this one. After cold rolling, we found quite different structures. This is an example from a 5% cold-rolled specimen. As you can see, there are many deformation twins in the copper particles, and the interface of the particles has become irregular, like this. This result indicates that the copper particle itself deforms plastically at a rolling reduction of only 5%. After 70% rolling, the morphology of the copper particles is completely changed: they are elongated along the rolling direction, like this. But it is interesting that the aspect ratio of the copper particles is not constant: some particles are strongly elongated and have a large aspect ratio, while others have a small aspect ratio. We have not confirmed this yet, but I think the difference depends on the crystallographic orientation of each copper particle. This is a magnified image of a copper particle in an 80% cold-rolled specimen; the interface is here. We can see strain contrast within the copper particle, which suggests that a significant lattice strain has been introduced along the rolling direction. And this is the tip of an elongated copper particle: here is the interface, but it becomes blurred towards the tip. I thought that this copper particle might be dissolving into the matrix at the tip, so we measured the chemical composition by EDS along this line. This is the copper particle, and here is its tip.
As you can see, the concentrations of iron and copper change gradually across this line, and it should be noted that the copper concentration at the tip is reduced. I think this is due to local dissolution of the copper here, although of course further analysis is needed. For macroscopic evidence, we measured the lattice parameter of the ferrite matrix by X-ray diffraction and plotted it as a function of the rolling reduction. In the IF steel the lattice parameter hardly changes with rolling, but that of the copper steel tends to increase slightly, and at 70% cold rolling the increase for the copper steel is about 0.0005 nanometres. You may say that this is too small, but from the calibration line relating the lattice parameter to the concentration of solute copper, this small increment corresponds to an increase in the solute copper content of about 0.6%. Since the initial amount of precipitated copper was 1.6%, this value corresponds to a substantial fraction of the precipitates. In other words, if the copper has been dissolved, part of the precipitated copper has been returned to solid solution by the rolling. We also looked for further evidence of the dissolution by DSC. In the DSC curves we can see exothermic peaks, and these change with the growth of the epsilon-copper or with the transformation of the copper particles from 9R to fcc. But the high-temperature peak at 266 °C is found only in the cold-rolled material, and this peak is believed to correspond to the formation of copper clusters. This result means that re-precipitation takes place in the cold-rolled material; in any case, dissolution of the copper has been brought about by the cold rolling, 90% cold rolling here. And finally, I would like to give my opinion on the possible mechanism of the mechanical dissolution of the copper particles. I think the dissolution of the copper particles by cold rolling can proceed as follows: when dislocations repeatedly shear the particles, the steps and the fresh interface created on the particles would have a high energy, and the solubility of the copper would then be locally increased. But this alone is probably not sufficient; we must consider another factor, namely the dynamic transport of copper atoms by the moving dislocations. But this is just an idea. We need to examine the actual distribution of the copper atoms, and we are now considering using the three-dimensional atom probe; we have started a collaborative study with companies on this. In addition, I think calculation methods are also important; for example, MD simulation is one of the powerful tools, so I should collaborate with Prof. Muneto. This is the conclusion. Thank you.
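The lattice-parameter argument above is simple enough to put into numbers. A minimal sketch using only the values quoted in the talk (0.0005 nm increment, 0.6 wt.% solute increase, 1.6 wt.% initially precipitated); note that the calibration slope here is back-calculated from those quoted values rather than taken from an independent source:

```python
# Sketch of the solute-copper arithmetic from the talk's quoted numbers.

delta_a = 0.0005          # nm, matrix lattice-parameter increase at 70% rolling
slope = 0.0005 / 0.6      # nm per wt.% solute Cu (implied by the talk, not independent)

solute_cu = delta_a / slope   # wt.% Cu returned to solid solution -> 0.6
precipitated_cu = 1.6         # wt.% Cu initially present as precipitates

fraction_dissolved = solute_cu / precipitated_cu
print(f"{solute_cu:.2f} wt.% dissolved, i.e. {fraction_dissolved:.0%} of the precipitates")
# -> 0.60 wt.% dissolved, i.e. 38% of the precipitates
```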
Thank you very much, Professor Tsuchiyama, for this interesting talk. Now we are open for discussion. Thank you, very interesting. In the high-resolution images of the deformed material, why does the interface appear like that? You said that it is a blurred interface. I said that it is an irregularity; I think we have to examine that point further, but I do not yet know the detailed structure. What was the thickness of your TEM sample? The thickness? I believe this observation was done by Prof. Murayama, and I am afraid I do not know the details of the experiment. Thank you. Next question. I have a comment about your mechanical dissolution. This happens in maraging steels under fatigue: you get dissolution through the repeated shearing of the precipitates at such locations. I expect you would see the same thing here. Thank you. There are many reports on deformation-induced dissolution, some in aluminium alloys; we have checked some papers, but I did not know that one. Sorry? I am afraid I had not seen that report. Another question? Did you follow the dissolution through the deformation itself, or through the change of the particle shape? We followed the change of the particle morphology; it is a normal, gradual change, and we must also consider the interface effect. May I ask: since the particles in your experiment are rather coarse, have you also studied small, nanosized particles? Nanosized particles? In this study we used over-aged specimens, so coarse particles. But in the peak-aged condition the particles would be nanosized, perhaps 10 nanometres or less. I am interested in the deformation structure in that case. We are now beginning the investigation of the peak-aged material. In that condition the particles are nanosized, and we see coherent boundaries. I wonder whether the deformation behaviour is different for these two kinds of particles. In the peak-aged case the particles have the bcc structure. And you have a different chemical composition for the small and the large particles. Yes, the chemical composition and the structure are both different in that case. With the bcc structure the particle is coherent with the matrix, so I think deformation of the particle occurs more easily, and the dissolution would also occur more easily; I think it would be easier. Are there other questions? You showed the DSC result, from which you concluded that copper is dissolved. But if you account for the heat involved, it cannot all have dissolved. Completely dissolved? No, only a part of the copper is dissolved. Are there no traces of the particles? A part of the copper remains as particles, and a part of the dissolved copper can re-precipitate: it may re-precipitate on the pre-existing particles, or new clusters may form. Sorry, I cannot hear. He is asking whether the re-precipitation occurs on the pre-existing copper particles rather than as new clusters. Okay, okay. I have not checked the nucleation sites directly, but I think there are many defects introduced by the rolling,
so I think the clusters form on the dislocations introduced by the cold rolling, not on the undissolved copper particles. Okay. Is there another question? I have two questions. The first is about how the lattice-parameter measurement relates to the copper particles; I am asking about the analysis, because the particles themselves contain copper. No, no, we did not measure the lattice parameter of the particles; we measured the lattice parameter of the ferrite matrix. The copper goes into the matrix, and the matrix lattice parameter changes because of the dissolved copper atoms, even though some copper is still present as particles. Yes. Yes. Okay. My other question follows from the previous one: if the dissolved copper re-precipitates, could the pre-existing epsilon-copper particles act as preferential nucleation sites? Oh, really? Yes. You mean that the pre-existing particles could be the nucleation sites for the re-precipitated copper? Yes, perhaps preferential nucleation sites. Ah, yes, perhaps the pre-existing epsilon-copper particles could also act as nucleation sites for the re-precipitation; but that is an open question. Okay, I understand, and I agree with your opinion. Yes, thank you. Okay, I think we must move on; we can continue this discussion later in private. So let us thank the speaker once again.
A lecture given by Toshihiro Tsuchiyama, at the Adventures in the Physical Metallurgy of Steels (APMS) conference held in Cambridge University. On how particles such as copper, which can be penetrated by dislocations in the ferrite, influence the properties. Second phases in steel, such as carbides, oxides, martensite, and so on, are usually used to enhance work hardening and so prevent plastic instability during deformation. Such hard dispersions are effective for increasing uniform elongation, but conversely they tend to deteriorate the local elongation and reduction of area. To improve both uniform and local deformability, it would be desirable for the work hardening to be enhanced by the dispersed second phase in the initial stage of deformation, and then for this contribution to disappear or become invalid in the higher strain region, leading to work softening. The authors believe that soft particles are one possibility for exhibiting such a functional change, and call it a hetero-to-homo structural transition. In this report, the effect of soft Cu precipitates on the tensile deformation behaviour of a ferritic steel is compared with that of hard VC carbide precipitates. In addition, the plastic deformation and mechanical dissolution behaviour of Cu particles under severe cold rolling is demonstrated.
10.5446/18638 (DOI)
The next speaker is Professor Fabio Miani, from the University of Udine. His topic is mechanochemistry. So I need also this device. And this is for moving. Hope it works, otherwise... So good afternoon, ladies and gentlemen. I'm very happy to be here, and not only excited because I'm hosted by Professor Bhadeshia, but I also must thank all the staff. When I sent emails around 10 o'clock in the evening, I had an answer within three hours, basically in the night. So it was fantastic. There are a lot of young guys working here, and I suggest that everybody appreciate this very hard work: not only the big professor, but all the young guys who are doing very, very good work. This is my own opinion; I think it must be shared by everyone here. Okay, this is one point. The other point is that I will take your attention for, let's say... I wouldn't say this is the focus of the conference, but I think that most of you know that there is not just one metastable iron carbide; there are several iron carbides which are interesting in steel metallurgy. And I think one of the simplest techniques to obtain them is the old story, now rather old, yes, of ball milling. Ball milling: you can call it mechanical alloying, you can call it mechanochemical synthesis; it has been known for years. Basically the American side went on from the work of Benjamin, and you know all the oxide-dispersion-strengthened alloys; yesterday we were hearing about oxide dispersion strengthening. But the Russian school, especially the civilian branch of the academy of sciences, was a little bit more involved as well, and you have many other people, I would say Yelsukov in recent years, in this special branch of science. I am not on one side or the other; actually I am a professor of steelmaking, but I have been interested in this technique for some 20 years. So there is a lot of experimentation and even more recent work; I think there is one paper in Acta Materialia from a few months ago, not by myself but by a German group. So I think the broad topic would be worth studying again. Why? One point is that we know most things about the iron carbide which is cementite; I think we should improve our knowledge of the other carbides, for instance epsilon carbide, which as you know has an important role when you are considering martensite and thermal cycles. I would say that Hägg carbide, which is Fe5C2, is another important carbide. This is just a suggestion, because of course we heard Professor Paxton, I think yesterday or two days ago, speaking about density functional theory, of which I am not at all an expert; please do not ask me any questions about that. But there are also recent results on the different carbides by the group of de Smit and Cinquini, which I think you know about. So I think that having some practical information from a very, very simple technique is worthwhile. You know that with ball milling, if you have patience you can even do it by hand: you get some device, possibly made at least of steel, some steel balls, then you put some elemental powders inside and you go on milling, possibly not with your hands but maybe with a small laboratory device, maybe a SPEX mill, which is very popular; even the Fritsch Pulverisette mill is very popular. Or you can build your own device, like I did some time ago; I think I will build another device soon with my students.
So let us go on with the story. Is that right? Okay. This was what I was telling you about, and I will present here basically mechanochemical synthesis; if you want to call it ball milling, okay, no problem, but the point is that between an alloy and a compound like a carbide there is something which is not black, not white, and so you should consider the whole story. As for characterization, I have been using here mostly Mössbauer spectroscopy; to be correct I should say transmission Mössbauer spectroscopy, but I will just say Mössbauer so you won't be bothered by it. I will propose a kinetic approach to mechanochemical synthesis; very, very simple ideas, because when you compare experimental results it is nice to compare one device with another, basically to see what your own action is with the milling time, which is a way of imparting energy to your elemental powders. You are not restricted to elemental powders: you can ball mill nearly anything if you want, salt and pepper if you like, but maybe this is more interesting for our topics here. And I think that, well, there is a wealth of results on the ball milling of iron-carbon alloys. The whole work was started by Paolo Matteazzi and Gérard Le Caër, I think nearly 20 years ago, and I worked with them; at the time I was a PhD student, and so we did some work together. I think that Gérard Le Caër is very good at Mössbauer spectroscopy; I am just an applied guy, so I can use it, interpret it, and do some experimentation. Paolo did some work about mechanochemical synthesis and the synthesis of nanocrystalline materials; I think he is still involved in some industrial spin-offs using nanophase powders made by this technique. I am more on the university side. So what is rather new in this presentation: you know that with very small carbon contents you can obtain very useful alloys which are known as steels; what I have the pleasure to present to you here is some discussion about very, very high carbon content materials, and I am trying to understand what comes out of them. In my opinion, from what I have written and what I have found in the literature, these results are quite interesting and in a way worth studying. On the other side I would point out that, for this specific field, there is a tradition that starts from Le Caër and Dubois, the French school of Mössbauer spectroscopy. They obtained a lot of results, and anyone in the field working with a hyperfine field distribution relates it to those resources. So that is the Mössbauer side. But the point is that you can also characterize your powders, because you obtain powders, by means of X-ray diffraction, and I think most of you are much more expert than I am in transmission electron microscopy. So the game for me, and the adventure, to use Professor Bhadeshia's word, is, first of all, to study the stability of the different carbides, as has been done recently by the group working on Fischer-Tropsch synthesis. You know that carbides play an important role for steels, but they have an important role as well in a special chemical technology, which is obtaining oil from gas, and this is practised industrially by Sasol, which is a very big South African group.
So I think there is a lot of interest in this technology, also because it is not just fossil carbon that you can put into oil; there are also, you know, these so-called green materials that you can convert into oil as well. So this is a rather hot topic for those people, and they have done very nice recent work; I think something more could be done by, let's say, the steel community. So let us go on. This is what I was mentioning to you: you get some balls, you get a jar, and you shake it, basically at around 20 hertz. It can be as small as a glass, or it can be bigger. You can produce powders in the range from some grams (typically a SPEX mill produces 3 or 6 grams, so you can make some experiments on that) up to kilograms with a scale-up; it is rather difficult to produce more than a few kilograms with such a device, but one never knows. So this is quite well known: I am putting the elemental powders, iron and graphite, inside, and I am obtaining some carbides. This is a scale-up, a little bit bigger if you want; and it is not just one ball, normally there are some 20 balls inside, and you just vibrate it, again at around 20 hertz, and the speed of the balls is around some metres per second. Of course, if you change the amplitude you will obtain basically different results, but as the SPEX mill is a sort of paradigm for the small laboratory experiments, the scale-up has been done keeping things like the velocity in that range, from 2 up to 5 metres per second, which is enough. Yes, another slide. We did also some other things. The big issue, which I have not solved yet, is to have control of the temperature inside the vial; there was the idea of making a prototype vial with internal conformal channels, so that you can control the temperature inside. I don't think this was really successful, because the reactions occur locally and that is what you would need to monitor. These results were obtained with the experimental mill, but anything else would do. Please just consider this BPR: what does it mean? Ball-to-powder ratio. It is important to consider the weight of your balls and the weight of the material you are acting on, because basically it substitutes, in a way, for the time of milling: if you go with a ball-to-powder ratio around 18 to 1, like that, then you go 10 hours milling; if you use a BPR even higher, then you can mill for shorter periods.
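The speaker treats the ball-to-powder ratio as partly interchangeable with the milling time. Under the simplest reading of that remark, that the nominal milling dose scales as BPR times time (my assumption, not a statement from the talk), the bookkeeping looks like this:

```python
# Sketch: equivalent milling time when the ball-to-powder ratio changes,
# assuming dose ~ BPR x time (a rough rule implied by the talk).

def equivalent_milling_time(t_hours: float, bpr_old: float, bpr_new: float) -> float:
    """Milling time giving the same nominal dose at a different BPR."""
    return t_hours * bpr_old / bpr_new

# e.g. 10 h at BPR 18:1 would correspond to roughly 5 h at BPR 36:1
print(equivalent_milling_time(10.0, 18.0, 36.0))  # 5.0
```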
So this is just a spectrum from Mössbauer spectroscopy; I hope that most of you won't be offended, but what you can extract from it, quite easily, is a hyperfine field distribution. From the previous data you can extract a hyperfine field distribution, and the model behind this was developed basically by Le Caër more than 25 years ago; it works for any kind of iron carbide, both in the field of catalysis and in the fields useful for steel science and business activities. So you can extract this hyperfine field distribution and get some idea of what is going on. We will not even speak about isomer shift and quadrupole splitting; let's say that everything is easy, and please do not be offended if I simplify things like that. So you assign, because there has been a lot of work by many people, you assign your hyperfine fields, and from the hyperfine fields you can extract the relative areas, and from the relative areas you can count your iron atoms and extract the percentage of each phase you have obtained. This is just one technique, and it is not the only one: you can use X-ray diffraction, you can use as many analytical techniques as you want. I think the real tough game, for instance, and I will come back to this, is that I still have many diffraction data from which, with much more modern techniques like the Rietveld method and the many algorithms your group is actually using, you could extract more information; the whole thing is a little bit disordered, because you have a lot of plastic deformation, so you have a lot of defects, alongside the formation of the carbides. So this is the assignment of the hyperfine field distribution seen from another point of view. Once again, do you understand what I am saying? Maybe not, so I will say it once again: you can extract the hyperfine field distribution from your spectra and then assign it to different carbides. In this case you will for sure see some iron here, which is rather easy to place because it sits at 330, so it goes here. Then you can have some carbon in solution, but I am careful about calling it a martensitic component, because I don't want to speak too much about that; please consider that normally I have a lot of carbon around. I am not considering small amounts of carbon, because in this work I am using very, very high carbon concentrations, so I am focusing on carbon-rich compounds: cementite, Hägg carbide, epsilon carbide. And here I think there are many more iron carbides than I have enumerated, because you can have different forms of epsilon carbide, for instance, depending on the structure; and additionally, and this is my current question, you can have other, higher-carbon compounds, which is what I am trying to extract, but I am not that sure. So, about the iron conversion ratio: this is the initial composition of cementite, 75 atoms of iron and 25 atoms of carbon, and this has been studied extensively in very different ways. This is, in a way, how it goes: the initial 100% of non-converted iron atoms evolving with time. You see here the milling time, and this is traditional for the technique, a 10 to 1 ball-to-powder ratio for three grams; and you can see the different evolution starting from different compositions. I wouldn't say we can say a final word about that, because if I am talking about steels, saying that I started from 95% atoms of carbon is very, very unbelievable; but still the atoms are reacting, in a different way I would say, and forming different carbides. For the moment I have identified basically the same carbides that you saw here. So once you have obtained those results, and this is the new presentation of these results, you can have some carbides; I do not find any others from my hyperfine field distribution, but this must be discussed and investigated as well. This is a very, very fine tool, because if you use X-ray diffraction, and I don't know if I will have time to present the patterns here, you will just see a very big broadening from the carbon; you go and see carbon, carbon, and it is very difficult to extract information from that.
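The area-to-fraction bookkeeping described above (assign each hyperfine-field component to a phase, read off its relative spectral area, and count iron atoms) can be sketched as follows; the areas below are placeholders, not measured values, and equal recoil-free fractions for all phases are assumed:

```python
# Sketch: relative subspectral areas -> Fe-atom phase fractions and conversion ratio.
# Placeholder numbers; equal recoil-free fractions assumed for every phase.

subspectral_areas = {
    "alpha-Fe (330 sextet)": 0.20,
    "cementite Fe3C": 0.45,
    "Hagg carbide Fe5C2": 0.25,
    "epsilon carbide": 0.10,
}

total = sum(subspectral_areas.values())
fractions = {phase: area / total for phase, area in subspectral_areas.items()}

# Iron "conversion ratio": share of Fe atoms no longer in unreacted alpha-Fe.
converted = 1.0 - fractions["alpha-Fe (330 sextet)"]
print(f"Fe conversion ratio: {converted:.0%}")  # 80% in this placeholder example
```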
So this is another comparison: these are standard results, and other groups have later obtained similar results, so it is quite safe to rely on them. What is not so well known, and maybe not well presented before, is the thermal stability of those carbides. I think this is very interesting, because once you understand that you have different carbides, understanding their stability is important. Of course these are very plain iron-carbon carbides, not the complex carbides you may have in steels, but still it is interesting. This is basically just one hour of annealing of a specific material, and you can see that the hyperfine field distribution is changing; these are the Mössbauer results, and here you see the influence of different temperatures. I think this would be very interesting to study and to explore in different situations and compositions, also because we know something about cementite and the thermodynamic properties of cementite, where there is very nice work by German people, but I don't think we have a lot of work on the thermodynamic properties of the other carbides, which may be useful not only for this field: if you want to use the CALPHAD approach, you need to have some data even for the metastable phases. So, my conclusions: mechanochemical synthesis is a very simple and elementary technique for synthesizing iron carbides; I think it is worth being considered by the community of the physical metallurgy of steels; I think this topic is prospectively interesting for people in the world of density functional simulations; and I think it would provide some additional data that could be useful, maybe not key, but useful for the whole physical metallurgy of steels community. Thank you for your attention. So we are open for questions. Harry. So, actually, I think the first-principles people have already published the thermodynamic properties of many of these carbides; they should be in the literature. Yes, I have sent your group some of this: the de Smit and Cinquini work, which is very excellent work from first principles, and it is not the only one; I think Professor Paxton will know a lot more. But, you know, in the past we were relying a lot on the Miedema approach and on semi-empirical estimations of the enthalpies of formation; now we have this fantastic game of density functional theory, and I think that having both kinds of data, I mean experiments, and of course these are disordered systems, and calculations, would be interesting and also, in a way, exciting. The other topic, which is maybe for younger guys than I am: you have all these well-established approaches, and I have a lot of data. I can provide you, even now if you want, the old diffraction data, so we can put things together and study the phase evolution together. There is a lot of experimental work done by myself over the years, and it is just a matter of putting all the things together, because when you use very short milling times, let's say 15 minutes or 30 minutes, in the iron-carbon system at the initial composition of cementite, you already have some alloy formation. So you have all the games that you usually play: you have the formation of a high-carbon alloy, then you can have some tetragonality, and so on. The point is to treat the X-ray diffraction in a precise way and also the Mössbauer data in a precise way; that, I think, would be a very nice game to play again.
Thank you for your presentation, but since you had no time during your talk to present the X-ray data, maybe now you can say something about those results and compare the X-ray results with the first-principles results. I think the whole game is still to be played; I mean, I have some ideas, but the question is whether I can do it alone, so I just put them out here. I have a lot of X-ray diffraction data and I want to be more precise, because all the work that I did over the years considered, of course, X-ray line broadening; all these carbides are nanosized and you can extract that information using very simple approaches. But now that we have more sophisticated software, I think the game should be: you have the Mössbauer phase analysis, you have basically the Le Caër assignment of the hyperfine field distribution, and then you go again and become more analytical, because if you have four or five phases, with disorder, well, it's a tough game. But if we have data, and I have data, we can play the game together. I have already sent it to the guys, but maybe I sent the email at two o'clock in the afternoon, so they have not got it yet; in the end the data will be available on the website, with some MATLAB programming, and there are at least 15 different time-composition X-ray diffraction datasets. So they are open, and I will also make available the Mössbauer data and the hyperfine field distributions; we have to connect them with one another, and this is a rather interesting venture, to my mind. The hyperfine field distribution is a very important thing, but X-ray data give information about crystallography and crystal structure, which is more important information about a phase than the magnetic hyperfine field alone. Well, you know, in the work that I have done over the years... normally, if I had included the X-ray data as well, I would not have managed within the 20 minutes of my presentation, okay? And in any case I am interested, because I want to check whether some other metastable carbides are formed or not, so for this high-carbon system I want to go deeper into the investigation, and of course I already have X-ray data. For the lower-carbon systems, on the other hand, I can tell you the X-ray work has been done so many times that it is not a problem, okay, and it confirms the hyperfine field distributions perfectly. Okay, so your observation is welcome. I think we have a question online. Yes: were there any differences, for example in the hyperfine field distribution, between mechanochemically synthesized carbides and those precipitated in steel? Well, you know, in my mind those systems, if you consider just the as-milled powder, are much more disordered compared to steels; if you go up in temperature you can have, let's say, more standard and less standard hyperfine field distributions, and those might be compared with some steels. But I think the game is mainly interesting for this side; for steels there are also other, much more important techniques, metallography and so on, okay. Are there any other questions? How do you find martensite during ball milling; is there any temperature rise?
Well, you find the martensite because from the X-ray data you have an a axis and a c axis, and, it is not my work but the work of German people, from the c/a ratio, from the peak positions, you can even extract the carbon content. I did that, and independently quite a lot of other people have done it, including recently this Acta Materialia work. But the ball milling is done at room temperature, when the powder is ferrite? Yes, basically it is pure iron with some impurities; this is commercial powder, so it will have some impurities. You could use even better powders, but this is just water-atomized and then reduced powder, the one currently used for sintering applications. So yes, you find martensite, but the focus here is not on martensite; we can discuss this topic, and I suggest that, if you want, I can give you the link and we can consider and read together this recent paper that has just been published. Thank you. All right, do you have any more questions? If not, let's thank Professor Miani again.
A lecture given by Fabio Miani, at the Adventures in the Physical Metallurgy of Steels (APMS) conference held in Cambridge University. On the synthesis of various carbides of iron, beginning with elemental powders. Mechanochemical synthesis by simple milling devices has proved to be an efficient experimental tool for the synthesis of nanosized iron carbides. Along with the milling action, which is basically affected by the specific chemistry, ball to powder ratio and milling times, simple low temperature thermal cycles are effective in stabilising the structure and promoting or dissolving specific phases. Some data collected by transmission Mössbauer spectroscopy will be presented, along with some considerations, by means of the analysis of the hyperfine field distribution, on the kinetics of the formation of iron carbides at the initial atomic composition Fe 5% C 95%, which, to the knowledge of the authors, has not been discussed before.
10.5446/18636 (DOI)
Thank you very much. So the next talk is going to be given by Igor Abrikosov, who originally is from Russia, from MISiS, which is now called the National University of Science and Technology, but who now works in Sweden. And you know, magnetic properties are the essence of steels, and there are so many unsolved problems in the magnetic properties of steel, one of which he has solved. So there's a long way to go. It's a long way to go. Yes. Thank you very much, Harry. First of all, I would like to thank you for inviting me here. You told us that you had no problem generating support for your conference, that basically everybody said yes within a minute. I don't know how difficult it was for you to assemble the audience, but at least in my case it didn't take more than a minute to convince me to come here when you explained the idea. Thank you very much. Fantastic idea. My presentation is about the magnetic properties of steel, and I am doing theory; I am actually doing quantum mechanical calculations. This work was done in collaboration with Marcus Ekholm from Linköping University and Alena Ponomareva from the Moscow Institute of Steel and Alloys. But of course, I would like to acknowledge the other people who were involved in this research: Andrei Ruban, Professor Gornostyrev, Dubrovinsky and Dubrovinskaia, Levente Vitos, Janne Wallenius and Pär Olsson. And I would like to thank my sponsors, the Swedish Foundation for Strategic Research and the Swedish Research Council; and since computation is my main tool, I would like to acknowledge the Swedish National Infrastructure for Computing. Let me give the full outline of my presentation as I submitted it to this conference, before it was severely cut. I will give an introduction on ab initio simulation of iron-based alloys. I will show that magnetostructural coupling gives a new possibility for steel design; at least this would be one of my provocative statements, which is one of the subjects of this conference. I will try to illustrate the universality of the theoretical methods by giving different examples. I will show calculations of the mixing enthalpies of iron-chromium alloys and of multi-component iron-chromium alloys. I will talk about studies of carbon, nitrogen, vanadium and niobium, and of carbonitride precipitation in austenite. I will show how magnetism influences the phase transition of the iron-nickel permalloy Ni3Fe. And I will also give an example where we were able to synthesize a new material by tuning its magnetic state, by synthesizing the B2 phase of Fe2Si. I will hopefully have time to make a conclusion. The main task for me is to convince this community that theory is a useful thing. The main goal which we have is actually to shorten the time for materials design; I think that by using theory we have a great possibility of shortening this time by as much as a factor of two. You have probably seen these types of graphs many times: if I want to do materials design, of the type presented in the previous talk, for example to design steels for nuclear reactors, I obviously cannot do my quantum mechanical calculations at that scale.
So the state-of-the-art idea for involving theory is the so-called multiscale approach, where I do quantum mechanical calculations for several hundreds of atoms and determine the relevant parameters, like energies, elastic properties, and interatomic interactions, and provide them to higher-level models, like thermodynamic or statistical models, where we can simulate phase stability, calculate free energies, and provide further information for even higher-level continuum models, like phase field simulations, where the microstructure can be simulated, and so on up to the engineering level, where the parameters and knowledge generated before are used. And by the way, I just want to point out that my area of research is at this blue arrow here: I work on quantum mechanical calculations and statistical mechanics simulations. This scheme is state of the art, but it is a little bit outdated. What we have realized now is that it takes too long to go through this chain, because it reflects the traditional idea and traditional vision of materials design: basically, you have a property which you need to optimize, you have an idea for the material that you want to create, and you test this material, traditionally in experiment. Nowadays you can also do theory, but you test this material, you see if it works or not; most probably it does not work, so you have to go back and reiterate and reiterate and reiterate this process. The new ways of using theory are given by the so-called high-throughput type of computations, where you identify the relevant parameter for your material and then scan, theoretically or experimentally, all available materials pretty quickly, because our codes are now pretty fast, to find the best material, the one which optimizes this parameter. The main problem with this approach is that we have to find a good set of parameters to optimize, the so-called descriptor. Not all parameters can be calculated explicitly: I will show you, for example, that a Young's modulus or a shear modulus can be calculated very reliably, but for hardness, toughness and other such things we cannot do too much directly at this point. An even more promising way, which we are working on now, is given by setting up computational databases. High-throughput design in the traditional approach has another drawback: it requires the formation of teams of theoreticians and experimentalists; we have to work together, so people who know the material and the problem have to find a theoretician who can do the calculations, et cetera. The way we have started to think about using ab initio theory is to create computational databases of parameters which are relevant for material properties. How nice would it be for you to Google the Young's modulus of iron, 10 percent chromium, 5 percent nickel, 3 percent cobalt, and to get this Young's modulus as a function of temperature, calculated with a quality comparable to experimental accuracy? This opens a new way for materials design, because the people who know how to design materials, the engineers, can use parameters calculated by experts in their particular field. So I would say that the goal for theoretical calculations now is to be able to predict and calculate relevant parameters at realistic conditions.
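A minimal sketch of the high-throughput idea just described: compute or look up one descriptor value per candidate and pick the optimum. The alloys and values below are hypothetical placeholders, not calculated data:

```python
# Sketch: descriptor-based screening over a candidate set.
# Descriptor here: Young's modulus in GPa; all numbers are made up.

candidates = {
    "Fe-10Cr-5Ni-3Co": 212.0,
    "Fe-12Cr-2Ni":     205.0,
    "Fe-8Cr-6Ni-1Mo":  218.0,
}

best_alloy = max(candidates, key=candidates.get)
print(best_alloy, candidates[best_alloy])  # Fe-8Cr-6Ni-1Mo 218.0
```

In a real database-driven workflow the dictionary lookup would be replaced by a query against precomputed ab initio results, as the speaker goes on to describe.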
By realistic conditions I mean the following: in most calculations so far we use a very approximate vision of the material, because our tool is density functional theory, and density functional theory, for which of course a Nobel Prize was awarded in 1998, is a very powerful theory, but strictly speaking it allows us to do calculations at zero temperature. So if I calculate elastic properties, most probably I calculate zero-temperature elastic properties. If I put those in the database, how nice would it be to use these zero-temperature elastic constants in estimating the growth, the phase transformations and the stresses in your material at 1500 Kelvin? So the task now is to learn how to calculate reliable parameters at realistic conditions, and the subject of this talk is partly related to these realistic conditions and to magnetism. Density functional theory was formulated in 1964-65 by Hohenberg and Kohn, and by Kohn and Sham, and as I said it is a very powerful theory; a Nobel Prize was awarded for it. But I think that 1978 is a very important date for theoreticians, because in that year Moruzzi, Janak and Williams published their famous book, in which there is a graph showing that, without any adjustable parameter, one can accurately calculate the atomic volume, or Wigner-Seitz radius, the cohesive energies and the bulk moduli of the 3d and 4d transition metals, achieving very nice agreement with experiment. In this graph I have shaded a rectangle which includes chromium, manganese, iron, cobalt and nickel. Despite the great success of the calculations by Moruzzi, Janak and Williams for all the other materials, the results for these elements are obviously not so nice, and it was realized from the very beginning that the problem was that these are magnetic elements. All the other elements are non-magnetic, and theory gave very good results in comparison with experiment; but for the magnetic elements, since magnetism was not taken into account in these early calculations, the agreement was actually quite poor. Just to show you how much better we can do with magnetic materials nowadays, I show calculations published this year in collaboration with the Aachen group, where we calculated the Young's modulus of iron-manganese alloys with chromium, cobalt, and actually also nickel and copper. The agreement between the theoretical results, shown here as open circles, for the Young's modulus as a function of the chromium and cobalt content in iron-manganese steels with an iron to manganese ratio of 2.3, and the nanoindentation experiments done by our collaborators, is actually very, very impressive. So there is a possibility to calculate relevant materials parameters with an accuracy almost as good as one can achieve in experiment. But once again, to achieve this quality we need to use realistic conditions, and in this talk I will concentrate on magnetism. If we look at body-centred cubic iron, BCC iron, we know that it is ferromagnetic, so all the magnetic moments in iron point parallel to each other; but this nice picture is actually present only at zero Kelvin. If you start to heat the sample, the magnetic moments start to disorder, and above the Curie temperature all the magnetic moments point in different directions. So we go from the ferromagnetic state to the paramagnetic state, and the difference between this arrangement and that arrangement was often ignored in theoretical simulations.
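The static approximation introduced next (the disordered local moment picture) replaces this live paramagnet with a frozen random arrangement of moments. A minimal sketch of how such a zero-net-moment collinear configuration can be generated for a supercell; the cell size is arbitrary:

```python
# Sketch: a random collinear +/-1 spin assignment with exactly zero net moment,
# the static stand-in for the paramagnetic state described above.

import numpy as np

def dlm_configuration(n_sites: int, seed: int = 0) -> np.ndarray:
    """Random up/down moment assignment over n_sites lattice sites."""
    if n_sites % 2:
        raise ValueError("need an even number of sites for an exactly zero moment")
    rng = np.random.default_rng(seed)
    spins = np.array([+1] * (n_sites // 2) + [-1] * (n_sites // 2))
    rng.shuffle(spins)
    return spins

spins = dlm_configuration(128)
print(spins.sum())  # 0: paramagnetic on average, disordered locally
```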
I will show you during this talk that if you take this difference into account, you can obtain much more reliable calculated parameters. Now, this picture has further simplifications. We suffer from the fact that movies are not recommended here, and it is not only us, basically the entire subject suffers from that, because in practice the atoms vibrate and the magnetic moments do not sit in fixed positions; they move all the time. So these motions also need to be included in the simulations. I will not talk about it too much today, but we make a certain simplification: I am going to talk about disordered magnetism. As I said, we have the paramagnetic state, where all the magnetic moments point in different directions and change all the time. The model which was extremely successful in describing this disordered magnetism is the so-called disordered local moment model, where you substitute this live picture of moving magnetic moments with a static picture of disordered moments pointed up or down in a random fashion. The model goes back to Hubbard and Hasegawa, and its practical implementation in the field was done by Balázs Győrffy, a British physicist who unfortunately passed away last year. I am going to show the applications; first of all, consider iron-chromium alloys. These are, of course, the basis for very many important industrial steels, particularly used as cladding materials in fast neutron reactors. It is known that low-chromium steels, with up to 10% of chromium, show a lot of nice properties: they are very stable and very good. But the origin of these nice properties of 10% chromium steels was not well understood. A lot of simulations have been carried out for these steels, but I would like to call your attention to how theory can complement the available experiments. This is an old figure from Hultgren, which shows the mixing enthalpy of iron-chromium alloys, in the alpha phase, so BCC iron-chromium alloys. It shows a tendency to demixing: the mixing enthalpy is positive all the way, and it is pretty much symmetric. I would like to underline that this experiment was done at 1600 Kelvin, up here on the phase diagram, shown by the blue line, and that is definitely above the Curie temperature for iron, so it is taken in the paramagnetic state. However, the operational temperature for reactors is about 800 Kelvin at most, where we are still in the ferromagnetic state. So is it possible to use this nice, parabolic mixing enthalpy for understanding the properties of iron-chromium alloys? We can do calculations; mixing enthalpies can be calculated nowadays. I will not go into the details of these calculations; they can be found in the papers cited here. But to make a long story short, this graph shows the calculated mixing enthalpy as a function of chromium content, for two cases. Blue indicates calculations done in the paramagnetic state, when all the magnetic moments are disordered, and the red and black symbols indicate calculations done in the ferromagnetic state, where all the magnetic moments are ordered. The blue line can be compared with the experiments from Hultgren's handbook, and this is the typical accuracy of an ab initio calculation; you can see that nowadays mixing enthalpies can be calculated with really very high accuracy.
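For reference, the quantity plotted in these figures is, in the standard convention (a textbook definition, not spelled out in the talk):

```latex
% Mixing enthalpy per atom of a binary Fe-Cr alloy, relative to the pure elements:
\Delta H_{\mathrm{mix}}(x) = E\left(\mathrm{Fe}_{1-x}\mathrm{Cr}_{x}\right)
    - (1-x)\,E(\mathrm{Fe}) - x\,E(\mathrm{Cr})
```

A negative value, as found here near 10% Cr in the ferromagnetic state, means the alloy is more stable than the phase-separated mixture; 1 eV/atom is about 96.5 kJ/mol, which is the factor of roughly 100 mentioned next.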
I use electron volt units here, electron volts per atom, which are probably not that familiar to this community; if you want kilojoules per mole, you have to multiply this number by roughly 100. Anyhow, we are able to reproduce this nice, parabolic behaviour in the paramagnetic state. But in the ferromagnetic state we see qualitatively different behaviour: the mixing enthalpy exhibits a very strong deviation from regular solid solution behaviour, and we see that iron-chromium alloys are particularly stable at around 10% chromium; the mixing enthalpy is actually negative there. And we were able to go to experiment and find indications that this is really true. So that is the normal story; but we can do more with theory. For the next generation of reactors, people now try to alloy with nickel, manganese and molybdenum. We can do the calculations adding many elements; from the ab initio point of view it is definitely not a problem to go from binary to multi-component alloys, and we can see how these elements influence the mixing enthalpy. In particular, we see that this stability is significantly reduced, and the tendency to spinodal decomposition moves to lower chromium compositions. Let me give another example of our calculations: calculations of impurity solution energies for niobium, vanadium, carbon, and nitrogen, which play a very important role in carbonitride formation. We are interested in these elements in austenite, which is FCC iron, and FCC iron is paramagnetic; so once again we are dealing with disordered magnetism, and the magnetic moments are alive. To describe this picture we had to develop a new model, which is a supercell realization of the disordered local moment model. We consider a system with a disordered distribution of magnetic moments; then we take, say, a carbon impurity, put it on an octahedral site, and go over the entire system, putting this carbon atom in different magnetic environments. This corresponds to the situation a carbon atom experiences in real life, where its magnetic environment changes all the time. The calculations show that the energetics are very different depending on where the atom sits, but because these changes are very fast, we have to average over all possible distributions. In this table, the final results for the impurity solution energies are presented; I would like to call your attention to the green column, which is our best type of calculation. The other columns show how wrong theory can be if we do not take many effects into account. Then we can compare with experiment, and for carbon and nitrogen we find that we can represent the trends pretty well. There is a little difference between the calculated and the experimental impurity solution energies; how important this difference is, is up to the engineers to decide. But I want to show, for example, one of the parameters which we were asked to calculate, which is the solubility product for vanadium carbide, vanadium nitride, niobium carbide and niobium nitride in austenite, calculated on the basis of these models. The solubility product represents the maximum product of the concentrations of carbon or nitrogen and of vanadium or niobium which is still dissolved in austenite; if you exceed this product, you obtain precipitates. Here our calculations are shown as red lines, and the different experiments are shown in blue and green; you can see that we do our calculations to the best of our ability at this point.
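The solubility product mentioned here is conventionally parametrised as a straight line in reciprocal temperature; a sketch of the standard form, with A and B as fitted constants (the talk does not quote specific values):

```latex
% Solubility product of, e.g., VC in austenite; [%V] and [%C] in weight percent,
% T in kelvin; precipitation occurs once the product exceeds this limit:
\log_{10}\!\left([\%\mathrm{V}]\,[\%\mathrm{C}]\right) = A - \frac{B}{T}
```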
We are in very good agreement with experiment for vanadium; a little difference is present for niobium. Let me move to the final part of my talk and show that, using magnetism, we can influence the interatomic interactions, and this gives us an opportunity to design new materials. This is the experimental phase diagram for iron-nickel, and we are particularly interested in the permalloy Ni3Fe, which exhibits an order-disorder transition at 516 degrees Celsius; that is just about 100 degrees Celsius below the magnetic phase transition, where the system becomes paramagnetic. So basically, if I want to model the system with completely ordered magnetic moments, I have to do the calculation somewhere down here, at zero temperature; for the completely disordered model, which I showed you before, I would be up here. But my actual transition happens when the magnetic moments have only just started to disorder: at the order-disorder transition the magnetization is reduced by about 40%, so I still have 0.6 of the saturation magnetization present in my system. So what I want to show is the following: I want to calculate this phase diagram assuming these three models for my magnetic state; this is part of the answer to your question. If I assume complete order of the magnetic moments, I obtain the right topology of the phase diagram; this is the nickel-rich part of the phase diagram, and the red dashed line corresponds to the experiment; but I overshoot my phase transition temperature by 300 degrees. If I assume complete disorder of the magnetic moments, I undershoot the order-disorder transition temperature, and the span is actually 500 Kelvin. So taking different types of magnetic order gives a spread of 500 Kelvin in simulating this phase diagram, and only by taking the right magnetic order can I reproduce the phase diagram correctly. And theory would not be theory if I could not do this self-consistently; for the experts, the details are given in this paper, but the transition temperature can be calculated self-consistently, in good agreement with experiment. What was important in the previous example is that different types of magnetic order obviously influence the phase transition temperature, and because of this we can try to use magnetism in designing new materials. So this is my final example, and I will try to go through it very, very quickly. I am going to talk about the iron-silicon phase diagram. We have the D03 structure Fe3Si, the B20 FeSi, and for Fe2Si the high-temperature hexagonal phase is known; but a B2-type phase was not known, at least not very well, and it is not present in this phase diagram. We did calculations of the mixing enthalpy for this material in the paramagnetic, magnetically disordered state, and to our great surprise we found that B2 Fe2Si is actually the most stable phase in the paramagnetic state. Of course, it is not stable if you cool the system: then the D03 phase should be stable, as it is. But if we are able to keep our system paramagnetic, Monte Carlo simulations carried out by us showed that the B2 Fe2Si phase can be synthesized. Of course, you also have to apply not only high temperature but high pressure, to suppress the B20 phase, which we do not want to have in this synthesis. This phase is metastable; calculation shows that it should decompose into D03 and the Fe5Si3 phase; but at least it should be possible to synthesize it if you do the synthesis at high temperature, where the system is paramagnetic.
We went to our experimental colleagues, and they were able to synthesize nice polycrystalline, basically single-phase samples with the composition Fe2Si. This graph should be read from the bottom to the top: we can clearly see the Fe2Si lines on this graph, and they are very sharp; only when we heat the system does it decompose into the epsilon and other phases, as it should. So the B2 Fe2Si material was synthesized by high-pressure, high-temperature synthesis, by tuning the magnetic state of the system. Now my conclusions: relevant materials parameters can nowadays be calculated ab initio with an accuracy comparable to experiment, but it is essential to carry out the simulations at realistic conditions, and temperature-induced magnetic excitations in particular are important; and by tuning the magnetic state we can synthesize new materials. Thank you very much. Thank you very much. Can I ask you, since we are asking questions, about short range order and how that fits into your limits between the disordered local moments and the collinear magnetic ground state. Does your scheme include short range order explicitly at the intermediate temperatures? So, the chemical short range order at intermediate temperatures can be included within the supercell technique without any big problem; and in the supercell and Monte Carlo calculations which we use, the chemical short range order is explicitly included in the simulations. Including magnetic short range order in the simulations is still a challenge. There are several works; for example, Andrei Ruban at the Royal Institute of Technology has developed a new technique which is probably a good one for including magnetic short range order. But magnetic short range order is still a challenge. So this would require you, for example, to go beyond the single-site CPA? We definitely have to go beyond the single-site CPA, but we probably also have to go beyond the collinear description of the disordered local moment picture; the full non-collinearity needs to be included. In the case of the iron-chromium enthalpy anomaly, is non-collinear magnetism critical? Not in terms of the comparison with experiment at 1600 Kelvin, which I presented; there the disordered local moment picture obviously works well. But around the magnetic phase transition it may contribute; if you need very high quality simulation parameters, you need to take magnetic short range order into account. Also, not the short range order; I was asking about non-collinear magnetism. Non-collinear magnetism, yes. So it would affect the enthalpy to some measurable extent? I expect so, because if we go from the situation with the minimum to the fully symmetric situation when we change the magnetic order, obviously the type of magnetic order is going to be important. Yes, I think so; I have not been able to do that calculation yet. The graphs you put up originally, showing the disagreement between experiment and theory in predicting the bulk modulus: are you essentially saying that the assumptions they used in those calculations do not take account of the magnetic moment, which does have an effect on, I guess, the bonds, different things? Yes. And then later on, obviously, you show that it has an effect on the enthalpy. Yes. Not the entropy, I guess, because that is the statistical entropy we refer to. Statistical. I mean, I'm just asking. Yeah.
So the statistical entropy is obviously not affected by magnetism, except through the interaction between the atoms, because you can have different types of chemical short range order. But you have to include the magnetic entropy, which is also a very interesting question; there is a very simple model for including the magnetic entropy, and it is unclear how well this model works. Okay. I guess the impression I'm getting here is that we have missed out some of these effects when calculating things like modulus and entropy and enthalpy previously. And this kind of worries me, because I'm wondering if, for example, all the phase diagrams that we are using, which depend on those calculations, are going to be affected: will it be a major effect, or will it be quite minor? It depends. In the CALPHAD community, the importance of the magnetic entropy is a well-known problem, and magnetic entropy is included in all phase diagrams calculated using Thermo-Calc. The question is how well we describe the magnetism. Okay. And that is a task for the theory. Yeah. This is probably a very simple question for you, because I don't understand magnetism at all, but you mentioned in passing spinodal decomposition in the iron-chromium system. Are you suggesting, or is there any evidence, that the miscibility gap in the iron-chromium system is actually due to magnetic perturbations? Oh, yes. Okay. Yes. The magnetic nature of the decomposition in iron-chromium is pretty well understood nowadays, and it is related partly to the magnetic frustration of the chromium; it is magnetically determined. That is fully correct. So the short answer is yes; the long answer would probably require another 20-minute talk. As a student, Harry told me about this two-gamma-state picture, gamma one and gamma two, in austenite, which is used to explain Invar alloys. Yes. So, going back to your Google way, how can I develop a new Invar alloy with a different composition, instead of using nickel? Is there any way of doing this right now? Yes. First of all, we published a paper in Nature in 1999 where we actually argued that it is not a two-gamma state: by taking non-collinear states into account, I, together with Mark van Schilfgaarde, who is now in London, was able to show that there is an infinite family of states which fill the gap between the high-spin ferromagnetic and the low-spin antiferromagnetic states. That is one contribution of the theory, so we are nowadays able to see what is really going on in the system. In terms of how to use this knowledge for the design of new iron-nickel alloys, at present there is only one way: you should talk to the experts, say to me or to Mark van Schilfgaarde. Yes. And we can try to do calculations at more realistic conditions. Our dream, as I said, is that we pre-calculate all kinds of parameters, depending on the magnetic state and on this non-collinearity, and you are able to go to a database and find them. Thank you. What is the ductility of the Fe2Si? Do you know? No, I don't. Good. As far as I know, industrial alloys based on iron and silicon contain no more than 6% of silicon. Could you explain the reasons for your interest in alloys with a high concentration of silicon? That was my first question, and the second question is: if we are speaking about properties which do not depend on the microstructure, you can predict these properties, you can predict the parameters of these alloys;
But if we take into account the structure, or the parameters of the microstructure, for example, what will be your next step to improve your calculations? Thank you very much. These are both very relevant questions. My interest in the iron-silicon system actually came not from metallurgy but from studies of the Earth's core. In 2003 we published a paper which demonstrated explicitly the pressure-induced phase decomposition in the iron-silicon system and the formation of B2 FeSi at ultra-high pressure, which is relevant for the understanding of the so-called D double-prime layer at the boundary between the solid, sorry, liquid core and the mantle. So the interest in high silicon concentrations in iron-silicon comes from there. The work which I presented now was a kind of side project, where we tried to understand the influence of magnetism on the phase stability. In terms of the influence of microstructure, and how to include microstructure in my calculations: I do not think that I can go much beyond the multi-scale modelling scheme which I started my presentation with, which is basically to try to compute parameters for each phase. We can look at the interfaces now, for example, and we can provide these parameters to experts in higher-level theories. One cannot do everything oneself. I have a quick question. Is it on? It's on. You mentioned that the miscibility gap is strongly related to magnetism. Does that mean that you can control spinodal decomposition through magnetism? That is a very interesting question. Theoretically, yes, but I do not think that you can do it in practice, because all the fields which are available at this point are too small on the energy scale of the phase stability. Yes, yes, the phase stability. These are simply very different energy scales; the fields we have in our laboratories are too small. Thank you very much, Igor, for an excellent talk. I apologize. Thank you. Thank you.
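On the "very simple model" for magnetic entropy mentioned in the discussion above: the formalism used throughout the CALPHAD community (and in Thermo-Calc databases) is the Inden model in the Hillert-Jarl polynomial form, where the magnetic Gibbs energy is G_mag = RT ln(beta+1) g(tau) with tau = T/Tc. The sketch below reproduces the standard polynomial from memory; the coefficients should be checked against the SGTE documentation before any real use.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def g_magnetic(T, Tc, beta, p=0.4):
    """Hillert-Jarl form of the Inden model:
    G_mag = R*T*ln(beta+1)*g(tau), tau = T/Tc.
    p = 0.4 for bcc, 0.28 for fcc/hcp (structure-dependent constant)."""
    tau = T / Tc
    A = 518.0/1125.0 + (11692.0/15975.0) * (1.0/p - 1.0)
    if tau < 1.0:
        g = 1.0 - (79.0/(140.0*p*tau)
                   + (474.0/497.0)*(1.0/p - 1.0)
                   * (tau**3/6.0 + tau**9/135.0 + tau**15/600.0)) / A
    else:
        g = -(tau**-5/10.0 + tau**-15/315.0 + tau**-25/1500.0) / A
    return R * T * math.log(beta + 1.0) * g

# bcc iron: Tc ~ 1043 K, mean moment beta ~ 2.22 (illustrative inputs)
for T in (300.0, 1043.0, 1600.0):
    print(f"T = {T:6.0f} K  G_mag ~ {g_magnetic(T, 1043.0, 2.22):8.1f} J/mol")
```

The magnetic entropy follows as S_mag = -dG_mag/dT, saturating near R ln(beta+1) well above Tc, which is exactly the contribution whose quality, as the speaker says, depends on how well the magnetism itself is described.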
A lecture given by Igor Abrikosov, at the Adventures in the Physical Metallurgy of Steels (APMS) conference held in Cambridge University. The emphasis is on how magnetic properties play a role in the properties of iron. Ab initio simulations based on Density Functional Theory (DFT) are known as a useful tool for the prediction of materials properties and for their understanding. In this talk we review recent progress in applications of DFT to Fe-based alloys. We underline the necessity of taking explicitly into account temperature-induced magnetic excitations. We show that magnetic and chemical interactions in Fe-based alloys are deeply interconnected, and strongly affect each other. We start with relatively simple examples, and show that there exists a very strong dependence of thermodynamic properties, like elastic constants, structural distortions, and mixing enthalpies, on the underlying magnetic state in Fe alloys with Cr, Mn, Ni, V, Nb, C, and N. We then show that effective chemical interactions in steels can be tuned by the global magnetic state, which opens exciting possibilities for materials synthesis. Using first-principles theory we demonstrate that in the Fe-Si system the magnetic disorder at high temperatures favours the formation of a cubic Fe2Si phase with the B2 crystal structure, which is not present in the alloy phase diagram. Experiment confirms the theoretical predictions, and the B2 Fe2Si alloy is synthesized from an Fe-Si mixture using a multianvil press.
10.5446/18629 (DOI)
Okay, so the first talk is going to be given by Francisca Caballero, who is from the National Center for Metallurgical Research in Madrid in Spain, and she's going to be entertaining us with a talk about the distribution of atoms in nanostructured bainite. So, Francisca. Hi, good afternoon. The work I'm about to present is a collaboration between the Spanish National Center for Metallurgical Research and Oak Ridge National Laboratory. And in this work, what we have done is to track the carbon distribution during the bainite reaction at the atomic scale using atom probe tomography. I believe most of you know that, since the discovery of bainite, there has been much discussion on the mechanism that controls this transformation. If you check the early literature, you can find at least two very different explanations of how this reaction takes place. You can read that the bainite transformation is a displacive transformation, the displacive theory, and that the transformation is essentially martensitic in nature; that means that individual atoms move less than one interatomic spacing during the reaction. In the literature, for the same years, at the same time, you can read the opposite explanation: the bainite transformation is a reconstructive transformation, the transformation takes place by thermally activated movements of atoms, and bainite grows by the movement of growth ledges on the broad faces of the interface. However, today I believe it is generally accepted that the bainite transformation is a displacive transformation, since experimental evidence of the invariant-plane-strain surface relief effect was provided by Professor Bhadeshia using atomic force microscopy. However, a displacive transformation does not always imply a diffusionless transformation, and nowadays the discussion is focused on the role of carbon during the reaction, on the role of carbon in the bainitic ferrite growth process. If you read the literature nowadays you can find, again, two different explanations. You can read that bainitic ferrite grows supersaturated in carbon, and after a plate of bainitic ferrite has formed, the carbon will partition into the austenite, or can precipitate inside the bainitic ferrite plate at lower temperature, forming what we know as lower bainite. That is the diffusionless explanation. But nowadays you can also read that bainitic ferrite growth is a carbon-diffusion-controlled process, completely the same type of transformation as Widmanstatten ferrite: bainitic ferrite growth is carbon-diffusion controlled, and if we have precipitation during the reaction, cementite precipitates at the austenite-ferrite interface at the same time that it is moving. What do we have to do to check which process is taking place during bainitic ferrite growth? What we need to do is to investigate that very early moment of the transformation when we have the very first bainitic ferrite plate, and to measure the carbon content in that plate. If the carbon content corresponds to the carbon in the parent austenite, then we have for sure a diffusionless process. If the carbon content is much lower and corresponds to that given by the equilibrium, then the growth for sure is carbon-diffusion-controlled growth. 
But unfortunately, and you can understand why from a very simple calculation of carbon diffusion, for the temperatures at which bainite forms, which can be between 400 and 450 degrees C, the time needed to fully decarburize that very first bainitic ferrite plate will be less than a second. So we cannot investigate that very early moment from an experimental point of view. What have we been doing all these decades? Instead, we have looked at the carbon content in the residual austenite when the transformation has finished. That is what we call the incomplete reaction phenomenon; it is an indirect validation of the diffusionless nature of the transformation. What we do is to measure, for instance by X-ray analysis, the carbon content in the retained austenite. If that carbon content, when the transformation has finished, follows the thermodynamic limit, the Ae3 line, then we can state that the growth during the transformation was carbon-diffusion controlled. But if instead the carbon content is much lower and follows what we call the T0 limit, then we can state that the growth was diffusionless. Let me explain what the T0 line means. The T0 condition means that when the transformation has stopped at that carbon content, there is a balance of free energy: beyond this point, transforming austenite to bainitic ferrite of the same composition by the diffusionless process is forbidden according to thermodynamics, because we would increase the energy of the system, and we all know that is not possible. So even though we have a lower amount of carbon in the retained austenite than the equilibrium, even though we have not approached the equilibrium and we still have retained austenite, the transformation will stop. That is what we call the incomplete reaction phenomenon, and we have been validating the bainitic ferrite growth process by looking at this phenomenon. On the right you have, for instance, an experimental test where we look, by X-ray analysis, at the incomplete reaction phenomenon and the T0 line. We measured the austenite carbon content when the transformation had finished, after the decomposition of austenite in a medium-carbon high-silicon steel at different temperatures. It is quite clear that when we decompose austenite in the bainite region between Bs and Ms, the carbon content when the transformation is complete follows the T0 line. Above Bs, when we transform to Widmanstatten ferrite and we still have retained austenite, and we make sure to avoid precipitation of cementite, it follows the para-equilibrium value. This is an indirect verification of the diffusionless nature of the transformation, but still, it would be really nice to be able to see and to measure the carbon supersaturation in the bainitic ferrite during the reaction, while the transformation is taking place. We thought that, although slow transformation kinetics cannot be very attractive for industry, it can be really nice for solving this fundamental problem. Here you have, and I am sure that you have already heard about it, the development of a high-carbon high-silicon steel that, when transformed at 200 degrees C after austenitisation, develops a nanostructured mixture of bainitic ferrite and retained austenite. This steel has attracted much interest in industry because of its mechanical properties. 
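The back-of-the-envelope diffusion argument at the start of this passage, that the very first plate would decarburize in under a second at 400-450 degrees C, can be reproduced with a crude t ~ w^2/D estimate. The Arrhenius constants below are typical literature values for carbon diffusion in ferrite, inserted here as assumptions rather than the speaker's numbers:

```python
import math

def carbon_diffusivity_ferrite(T_K, D0=6.2e-7, Q=80_000.0):
    """Arrhenius diffusivity of C in alpha-iron (illustrative constants:
    D0 in m^2/s, Q in J/mol; check against a data compilation)."""
    return D0 * math.exp(-Q / (8.314 * T_K))

def decarburisation_time(plate_thickness_m, T_K):
    """Crude t ~ w^2 / D estimate for a supersaturated ferrite plate."""
    return plate_thickness_m**2 / carbon_diffusivity_ferrite(T_K)

w = 0.2e-6  # a ~0.2 micrometre bainite plate (assumed thickness)
for T_C in (200, 400, 450):
    t = decarburisation_time(w, T_C + 273.15)
    print(f"{T_C} C: D = {carbon_diffusivity_ferrite(T_C+273.15):.1e} m^2/s, t ~ {t:.2g} s")
```

The same arithmetic also shows why 200 degrees C is attractive: D drops by several orders of magnitude, stretching the decarburization to observable times (and the real process, coupled to carbon uptake in the austenite and to the multi-day transformation itself, is slower still).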
You will hear much more about that in this conference, with more details, but for me it was really interesting because, in my opinion, it solves the fundamental problem that we have. And, talking about kinetics, as I told you this is a very slow process. You can see here on the left some kinetics data: we are measuring the evolution of the different phases as a function of time for this high-carbon high-silicon steel transforming at 200 degrees C. The transformation takes place over between two and six days. So I believed that we would have time to look at how the bainitic ferrite is decarburizing while the transformation is taking place. That is what we did, and the first thing we did was to use X-ray analysis. Here in green you see the evolution of the phases, how the transformation progresses; in blue you see the carbon enrichment in the retained austenite; and in red you see the carbon content in the bainitic ferrite as a function of time. Again we validate here the incomplete reaction phenomenon, and it is quite clear that once we reach the T0 value, after 150 hours, beyond that point we do not get additional bainitic ferrite formation, and we do not get additional carbon enrichment; the transformation has stopped. But we were not able to monitor the corresponding decarburization of the bainitic ferrite. At this point we knew that X-ray analysis is not the right technique to look at the carbon supersaturation in the bainitic ferrite, and that is because with X-ray analysis we always have average values of the carbon content in the phases, in the retained austenite or in the bainitic ferrite. If we have some local carbon enrichment in the bainitic ferrite plates, we will catch that carbon in our measurement, and those measurements do not correspond to carbon in solid solution in our bainitic ferrite. That is how we approached atom probe tomography. We need a technique that allows us to determine the carbon content locally, at the nano scale, and away from any possible carbon-enriched regions. Here you have a nice example of a needle-shaped atom probe sample. With this technique we are able to reconstruct in three dimensions the positions of the different atoms. Here you can see the carbon distribution map, where every point corresponds to a carbon atom; and we can have the same information for the substitutional solutes. The big region on the right is a high-carbon region, that is retained austenite, and the low-carbon region on the left is bainitic ferrite. And here in this example we already have those carbon-enriched regions in the bainitic ferrite close to our interface; this is an austenite-bainitic ferrite interface. What is interesting is that we can also quantify the level of carbon in solid solution in the ferrite. For this particular case, which corresponds to the high-carbon high-silicon steel transformed at 200 degrees C for 10 days, that is, after completion of the transformation, the level of carbon in the retained austenite is comparable to that given by X-ray analysis and the T0 value. And the level of carbon, away from those carbon-enriched regions in the bainitic ferrite, is lower than that given by X-ray analysis but still higher than that given by para-equilibrium. So with this technique we were able to see the carbon supersaturation in the bainitic ferrite. Here you see the results as a function of time. These are the atom probe tomography results; you see again the evolution of the phases with time. 
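A side note on the X-ray route mentioned above: the carbon content of retained austenite from X-ray analysis is normally obtained by inverting an empirical lattice-parameter calibration of the Dyson-and-Holmes type. A minimal sketch, with illustrative coefficients (real analyses also correct for Mn, Ni, Cr, Mo and so on):

```python
def austenite_carbon_from_lattice(a_angstrom, a0=3.578, k=0.033):
    """Invert an empirical relation a = a0 + k * wC (a in Angstrom,
    wC in wt%). a0 and k are illustrative values, alloy-dependent in
    practice; substitutional corrections are omitted here."""
    return (a_angstrom - a0) / k

# e.g. a measured austenite lattice parameter of 3.62 Angstrom
print(f"~{austenite_carbon_from_lattice(3.62):.2f} wt% C")  # ~1.3 wt%
```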
The blue points again correspond to the carbon enrichment of the retained austenite. It is quite clear that we have very wide error bars, and we will come later to the reason for that; the error bars at any given time correspond to the different atom probe samples that have been analysed, and it seems that we have a huge dispersion of data for a given treatment. But, and it is clearer on the right, where I change the scale on the Y axis, it is quite clear that this time, with this technique, we can see the decarburization of the bainitic ferrite, and the carbon supersaturation in the bainitic ferrite is evident during the whole bainite reaction. Of course, we were not able to see and validate that very early moment with the full carbon supersaturation in the bainitic ferrite, with the carbon content of the parent austenite, because we are at 200 degrees C, where carbon can still move. But we are aware that the slow kinetics of bainite has always been a traditional argument for the diffusional explanation and theory. And it makes sense, because it is hard to understand how the carbon can be trapped inside the bainitic ferrite if the interface is moving so slowly. For this reason, what we did was to perform the same carbon content determinations for three very different steels: a medium-carbon low-silicon steel that transforms to upper bainite with interlath cementite precipitation during the bainite reaction, and to lower bainite with intralath precipitation at the same time that the bainite reaction is taking place; a medium-carbon high-silicon steel that at higher temperatures transforms to carbide-free bainite, while at lower temperatures we have intralath cementite precipitation during the bainite reaction; and an additional temperature for the high-carbon high-silicon steel. The three steels have very different kinetics, and we are able to track the carbon supersaturation in the bainitic ferrite over a wide range of transformation temperatures. And you can see here the results; the results correspond to the end of the transformation. It was quite evident that for transformation temperatures below 375 or 350 degrees C, even when the reaction has finished, we can still detect and observe the carbon supersaturation in the bainitic ferrite. The carbon content in the bainitic ferrite for higher temperatures already approaches the equilibrium. But what is interesting is that the tendency for the carbon supersaturation in the bainitic ferrite as a function of temperature is quite similar for higher and lower temperatures, a continuous behaviour, and it is independent of whether or not we have cementite precipitation at the same time as the bainite reaction. In my opinion, what we have here is enough experimental evidence that bainitic ferrite grows supersaturated in carbon, but when we transform the steel at higher transformation temperatures, all the secondary processes that are controlled by carbon diffusion are activated. And what processes are they? We have investigated those processes at the atomic scale as well. First of all, there is the carbon partitioning from the bainitic ferrite into the retained austenite. Investigating that, we were able to detect it in the bainitic microstructure; these are samples corresponding to the high-carbon high-silicon steel transformed at very low temperature, but it is not exclusive to this very sophisticated steel. 
It happens also for conventional carbide-free bainite that different sizes of retained austenite trap very different amounts of carbon. Blocky austenite has a much lower carbon content than the micro- and nano-scale films of retained austenite. And that is really beautiful for our microstructure, because we know that with carbon we can stabilize our austenite mechanically, and in that way, with a wide range of austenite sizes and a wide range of carbon contents, we are able to control and to have a progressive TRIP effect that allows us to enhance the ductility and toughness in these types of steels. But look at these transmission electron micrographs, because it is quite clear that close to the ferrite-austenite interface in bainitic structures we have a high dislocation density. And when the carbon is moving from the centre of the bainitic ferrite into the retained austenite, it will find this defected region, and it is not strange what we found by the corresponding atom probe tomography: carbon segregates to those dislocations, and, think about it, we have extra strengthening in our microstructure through Cottrell atmospheres. And finally, depending on the transformation temperature and the precipitation kinetics, maybe cementite can form before the carbon escapes from the bainitic ferrite. In that case we will have intralath precipitation in the ferrite, and what we have is what we all know as lower bainite. I believe that at this moment we have plenty of experimental evidence to state that bainite formation is displacive and diffusionless in nature. Thank you very much, Francisca. Very entertaining talk as well. Thank you very much. I'll throw it open to the floor as we have done so far today. Any questions? Yep. You talked about dislocations accumulating in the retained austenite. I wanted to ask, do you know of any work being done on whether slip can transmit from the austenite back into the ferrite? Are those interfaces opaque or transparent to glide? Yes, those dislocations are generated by plastic accommodation of the retained austenite during the bainite reaction. But if we think of the crystallographic match between the bainitic ferrite and the parent retained austenite, we have some possible plane matches which give us the idea that those dislocations can be inherited and transferred from the retained austenite to the bainitic ferrite. Francisca, you showed a very nice graph with the carbon in ferrite going very low, continuously going up all the way to C-bar, probably finally when it reaches the martensite transformation. There was work a long time back on coupled diffusional-displacive growth. Can we develop a model, rather than calling it bainitic ferrite and martensite, with a continuous flux going from zero to two? This is a chemical analysis of the carbon in our bainitic ferrite. We do not have crystallographic information in atom probe tomography, but nowadays there are investigations on the crystallography of that ferrite that is trapping so much carbon. That is one approach to your question of why there is so much carbon. Another approach that I think we should not forget is that when we see the homogeneous distribution of carbon in the bainitic ferrite by atom probe tomography, it looks like it is in solid solution, and I do not doubt that; but we still also have vacancies, as in martensitic steel, and if I have a vacancy in the bainitic region, I am sure that carbon will be comfortable there as well, but we are not going to notice. 
When it is at dislocations we can notice, but when it is in a vacancy we cannot. So I think of different systems, BCT or FCC, but why not a BCC-plus-vacancy system to explain that level of carbon in our structure? There is no tetragonality? There is no what? Tetragonality. Harry has been investigating that, and there is no evidence so far. Calculations can be an explanation, but the experimental evidence is not conclusive yet. First of all, your original question: there is no evidence for continuity, otherwise there is nothing to stop the reaction from proceeding to the ferrite equilibrium curve; and the evidence for tetragonality is in Scripta. Experimental evidence. You were not very clear in your conclusion. What would you like? The question from my students is: if it is BCT, why would you call it ferrite? And I said I would have to ask what we should call it. Can you give me a little more? Okay. Enrichment of carbon atoms at dislocations: does it involve diffusion? Your conclusion is then the result of diffusion. You are right. Since we have dislocations at the interface, the carbon partitioning into the retained austenite, into the residual austenite, is not as high as we might expect; as for how much carbon can transfer, it will be lower. And also the enrichment of carbon needs diffusion. Another question: secondary processes. I understand your point. Another question is the formation of a nucleus of carbide; it also needs jumps of carbon atoms. In the bainitic structures we have seen the carbon supersaturation in the bainitic ferrite, the carbon-enriched retained austenite, the carbon at dislocations; and if we have cementite precipitation, interlath or intralath precipitation, that is evident by TEM or atom probe tomography. But in terms of nano-clusters or carbon clusters, what we have seen is that with longer ageing, much later than the transformation has completed, or during a subsequent tempering, those carbon-enriched dislocations evolve into carbon clusters, and those carbon clusters, we believe, are the perfect nucleation places for epsilon carbides. Thank you very much. A question: you show in the analysis that for thinner austenite grains you have more carbon than for thicker ones. Do you have any comment about that? Yes, it is a question of how far along the retained austenite is in the progression of the reaction. Those micro- and nano-scale films are between bainitic ferrite plates, and the blocks are between sheaves and can be observed by light optical micrography. But we have a maximum volume fraction of bainite that can form according to the incomplete reaction phenomenon, and those blocks will be the remaining austenite. Yeah, lovely work. The idea of high silicon is to suppress cementite formation, and you showed a nice image of a very small cementite particle. Is it surrounded by an enriched area of silicon? Did the silicon actually have to diffuse out of the way at 200 degrees C to allow the cementite to form? Okay, I believe that in this particular case you can see that the silicon, we can consider, is homogeneously distributed, and it is trapped inside the cementite particle. It is a para-equilibrium cementite particle, precipitated intralath. We have some questions from the internet. Sujeh Chakka asks: what is your thought for low-carbon steel, where the bainite transformation happens at higher temperature, especially for upper bainite? Do you still believe that upper bainite is displacive in nature? 
Okay, I agree that in the carbon supersaturation study we did not go lower in carbon content than 0.3 weight percent. I believe it can be much more complex, because we have a wider range of bainite morphologies and everything. But I believe what the professor showed us today: granular bainite can be plate-like bainite, and it can be formed by the same mechanism. I agree that we do not have experimental evidence on the table now, but we have a wide range of temperatures, and we can think that we will have bainitic ferrite growth by the diffusionless process, with the other secondary processes anticipated earlier and earlier the higher the transformation temperature. I believe that there are no different types of bainite, just bainite. Thank you. Another question. Your work is based on a mixture of bainitic ferrite and austenite. What about classic bainite, meaning a mixture of bainitic ferrite and carbide? Do you observe the same enrichment of the ferrite? Yeah, that is the reason why we tested the medium-carbon low-silicon steel, which is a conventional bainitic steel. That is for between 375 degrees C and 525 degrees C, and you have in this slide, I do not know if I can point it out, two examples of transformation for that steel, 500 and 375. 500 is conventional upper bainite and 375 is conventional lower bainite. At 500 the carbon supersaturation was not detected because of the cementite precipitation; the carbon supersaturation corresponds to those blue points, but we still see carbon supersaturation at 375. Of course, for higher transformation temperatures all these precipitation processes take the carbon away from the bainitic ferrite, and we cannot detect it because it has already precipitated. Okay, we just about have time for one more quick question. At the end of your presentation you mentioned the crystallography of the ferrite. Is this related to plasticity, or is it related to carbon segregation at its interface? Because if it is carbon segregation that is playing a role, then maybe you should take a look at the interface, and then the question is: how are you going to do it by APT? Okay, for the carbon in the bainitic ferrite I think we have evidence by atom probe tomography that it is homogeneously distributed; there is no carbon segregation at the interface, no carbon peak at the interface. What we have at the interface, which can make the crystallographic analysis very difficult, is a high dislocation density, and at high resolution it can be really hard to determine the actual structure. Apart from the carbon supersaturation, we also create distortion in the bainitic ferrite. Just to be clear, what do you consider plasticity to mean here? I do not know what you mean by plasticity. Dislocations, dislocations at that interface. Okay, yeah. Okay, ladies and gentlemen, I think we will have to leave it there and move on. So could we all show our appreciation for this.
A lecture given by Francisca Garcia Caballero, at the Adventures in the Physical Metallurgy of Steels (APMS) conference held in Cambridge University. The atomic mechanism of the bainite transformation is discussed in the context of the highest resolution analytical experiments conceivable. After decades of debate on the mechanism for the formation of bainite, it is accepted that bainite grows via a displacive mechanism i.e., as plate-shaped transformation products exhibiting an invariant plane strain surface relief effect. But there is still much discussion on the diffusional or diffusionless nature of bainite. Elements of the theory are now routinely being used in the design of innovative steels and in the interpretation of a variety of experimental data. However, current experimental and theoretical understanding is limiting technological progress. The purpose of this atom probe tomography study was to track the atom distributions during the bainite reaction in steels with different carbon and silicon contents transformed over a wide range of temperatures (200-525 centigrade) to elucidate the role of reaction rate and diffusion in the formation of bainite with and without cementite precipitation. The results are providing new experimental evidence on subjects critically relevant to the understanding of the atomic mechanisms controlling bainitic ferrite formation, such as the incomplete transformation phenomenon, the carbon supersaturation of ferrite, the plastic accommodation of the surrounding austenite, and cluster and carbide formation.
10.5446/18628 (DOI)
Welcome to the meeting, and we are now ready for the first lecture, which will be given by Professor Toshihiko Koseki from the University of Tokyo. My colleagues here, actually Nambu is over there, have contributed to the research that I am going to talk about today. The title of my talk is Architectural Steel. This name was given by Harry when I gave a talk somewhere else on our multi-layer steel, and because I like the name, I have used it again here. All of us here know that many different kinds of steel were developed in the 20th century, and the performance and properties of steel improved significantly over the century. Those developments and advancements are certainly thanks to the theoretical basis that was also developed in the 20th century, and were achieved by alloy design with different combinations of rare metals, by microstructure control through thermomechanical processing, and by high purification. We have also fully used the strengthening mechanisms to develop different steels. So it looks like, to improve steel, we have done almost everything; but still the demand for high-performance steel is never-ending, and with increasing environmental consciousness the demand becomes stronger and stronger. For example, if we look at automobile steels, higher strength and higher ductility are demanded. To meet the demand, we have to go in this direction. How can we do that? We may need new alloy design, and we may need new micro- and nano-scale microstructure control that we did not try in the 20th century. We may also need ultimate refinement of the microstructure and grain structure. Alternatively, we may need externally architectured steel, externally designed steel, where we can get away from monolithic steels and depart from the thermodynamic restrictions in the design of materials, in the design of steels. So this is my proposal here: today I am talking about externally designed multi-layer steels. This is an example of a multi-layer steel where we combine high-strength as-quenched martensite and a high-ductility steel, to achieve a combination of high strength and high ductility. Although this is a 25-layer steel, the number of layers could be fewer depending on the combination of steels, as I will mention later. To fabricate multi-layer steels, we stack the steels of our interest and then hot roll, warm roll, or even cold roll for bonding. Finally, we heat treat to achieve the desired microstructure and to increase the interfacial toughness. For the high-strength layers we use as-quenched martensite, and for the high-ductility layers we use austenitic steels or ferritic steels or TRIP steels or dual-phase steels, whatever steel has ductility. In other words, you can combine any steels of your interest, and sometimes we insert a nickel layer to prevent carbon diffusion between the layers. Why the layered structure? To achieve high elongation, we need to elongate the martensite, the as-quenched martensite. In dual-phase steels, the martensite is not deformed because the stress is not partitioned to the martensite, and there is a stress concentration in the ferrite matrix, particularly between the martensite islands, which results in voiding and eventually fracture. In the case of a layered structure, the stress is partitioned to both the ductile layer and the high-strength layer. So, with plastic constraint, the martensite should be elongated as long as local fracture is suppressed, which is not easy. 
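The stress-partitioning argument just made, in which every layer sees the same strain and even the martensite is forced to carry load and to stretch, is the iso-strain picture, under which the laminate flow stress is a rule of mixtures over the layer flow curves. A minimal sketch with made-up Hollomon-type flow curves (illustrative, not measured data):

```python
def composite_stress(strain, f_hard=0.5):
    """Iso-strain laminate: sigma = f*sigma_hard + (1-f)*sigma_soft.
    The two power-law flow curves below are purely illustrative (MPa)."""
    sigma_hard = 2000.0 * strain**0.02   # strong, low-hardening layer
    sigma_soft = 800.0 * strain**0.25    # ductile, high-hardening layer
    return f_hard * sigma_hard + (1.0 - f_hard) * sigma_soft

for eps in (0.02, 0.05, 0.10, 0.20):
    print(f"strain {eps:.2f}: ~{composite_stress(eps):.0f} MPa")
```

This is also why, as the next data show, the strength of laminates tracks the rule of mixtures while the elongation does not: the averaging says nothing about when a brittle layer fractures locally.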
These are the data on strength and ductility of different laminated metals, which were summarized by Lesuer and Sherby. The strength always follows the rule of mixtures, the rule of averages, along this line, but the ductility does not follow the rule of mixtures. This vertical axis is the elongation of laminated metals consisting of 50% ductile and 50% brittle components, and the elongation comes down here, far below the rule of averages, as the elongation of the brittle component becomes low. Why do we have such low elongation? Because we have brittle fracture in the brittle layer during elongation, as shown here, which is caused by delamination and the resulting H-shaped cracks, and also by so-called tunnel cracks. We have to suppress those local fractures to obtain larger elongation. In terms of delamination and H-shaped cracks, naturally, increasing the interfacial toughness increases the elongation of multi-layer steel. When there is delamination, the brittle layer behaves like a single component and, without plastic constraint, it fractures with low elongation, like here. If the interfacial toughness is not enough, the elongation is still below the uniform elongation predicted by the rule of mixtures; with increasing interfacial toughness, the multi-layer steels fracture with diffuse necking in a ductile manner. This boundary can be predicted by this equation, which was also developed in composites research, and the criterion is given by this line. It looks like this criterion works well. For the prevention of tunnel cracks, there is also work on tunnel cracking in semiconductor research, which is given here, where the thickness of the brittle layer is limited as a function of the fracture toughness of the brittle layer. So you need to reduce the thickness of the brittle layer to avoid tunnel cracking. But this criterion was derived for an elastic situation. In the case of metals, you have a plastic zone in the ductile layer in the vicinity of the tunnel crack, and we have to consider that. Considering the elastic-plastic situation, we derived this criterion. Again, the thickness of the brittle layer should be reduced to increase elongation, as a function of the fracture toughness of the brittle layer and the yield strength of the ductile layer. Certainly, decreasing the layer thickness increases the elongation of the as-quenched martensite. By decreasing the thickness of the martensite, this type 420 high-carbon martensitic stainless steel can be elongated up to 20%; without the multi-layer structure, we cannot elongate as-quenched martensite in this way. Also, the fracture surface changes from brittle to ductile dimpling as the thickness of the martensite layer is decreased. This is the effect of the thickness of the brittle layer on elongation; these two lines are from the elastic model and the elastic-plastic model. We often use austenitic type 304 stainless steel for the ductile layers, because type 304 stainless steel has good work hardening, and then the transition from low elongation to high elongation is close to the elastic model. When you use interstitial-free ferrite as the ductile layer, because this steel does not show much work hardening, the transition behaviour from low elongation to high elongation is close to the elastic-plastic model. In a multi-layer steel, the transition from low elongation to high elongation is somewhere in between. The elastic-plastic model we developed gives a lower boundary for the design of multi-layer steels. 
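The semiconductor-derived tunnel-crack criterion referenced here amounts to a steady-state energy-release-rate balance: a crack can channel along a brittle layer once G_ss = g*sigma^2*h/E' reaches the layer toughness Gamma = K_IC^2/E', so the critical thickness scales as (K_IC/sigma)^2 and the modulus cancels. A hedged numerical sketch; the geometry factor g and the property values are assumptions, not the speaker's elastic-plastic result:

```python
def critical_layer_thickness(K_IC, sigma, g=2.0):
    """Elastic tunnel-cracking estimate: steady-state energy release
    G_ss = g * sigma^2 * h / E' equals the toughness Gamma = K_IC^2 / E'
    at h_c = K_IC^2 / (g * sigma^2); the modulus E' cancels.
    g is an O(1) geometry factor (assumed here)."""
    return K_IC**2 / (g * sigma**2)

# e.g. a brittle martensite layer: K_IC ~ 20 MPa sqrt(m), sigma ~ 1.5 GPa
h = critical_layer_thickness(20e6, 1.5e9)
print(f"h_c ~ {h * 1e6:.0f} micrometres")  # ~89 um with these inputs
```

The talk's point survives the plasticity correction: tougher brittle layers may be thicker (or fewer), while very brittle ones must be thin.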
Also, the thickness limit of the brittle layer is a function of fracture toughness. This is confirmed here: we prepared martensitic steels with different fracture toughness and measured the transition. Increasing the fracture toughness allows thicker brittle layers; in other words, you can increase the thickness of the brittle layer if you have a martensitic steel with better toughness, or you can reduce the number of layers. By controlling the interfacial toughness and the thickness of the brittle layers, we can elongate the as-quenched martensite. If you apply neutron diffraction, this is the result: you can measure the full partitioning of stress. Because of the partitioning of stress, the martensite is elongated here. As a result, we obtained steels which have high strength and high ductility. Those are plotted here: those steels have strengths of more than 1200 MPa and still have elongations of more than 20%. The product of strength and elongation is more than double that of conventional monolithic steels. These multi-layer steels keep their elongation even under high-strain-rate deformation. Here are the stress-strain curves under different strain rates; the maximum is 800 per second. While the strength increases with increasing strain rate, the elongation does not change much. Those photos are the results of a high-speed buckling test, which simulates the collision of the front side members of automobiles. The 1200 MPa multi-layer steel deforms perfectly, in the same way as the 590 MPa dual-phase steel, and there is still room for additional deformation because of the high ductility. There is no delamination or local cracking during this high-strain-rate deformation. Here is another high-strain-rate deformation: this is an impact bending test, which simulates the pillars of automobiles. The bending strength is increased in the multi-layer steels; here is the DP590 for comparison, and the bending strength is increased here. For the application of multi-layer steel, we need welding. We are trying to weld the multi-layer steels using friction stir welding; this is a cross-section. The welding is successful, with a joint efficiency of more than 90%. It is interesting to note that the layered structure remains not only in the heat-affected zone but also in the stir zone. Using multi-layer steel, we can also look at the deformation behaviour of as-quenched martensite, which was difficult before because of the low ductility of as-quenched martensite. We are now conducting many in-situ observations on the deformation of as-quenched martensite using EBSP. Here are the multi-layer steels; this is the monolithic martensitic steel. We found that the slip is always in the in-lath plane up to certain strain levels; beyond that, slip across the lath direction appears. This slip is concentrated in the regions where the Schmid factor in the in-lath plane is high, and there is no slip in those regions where the Schmid factor is low. This is a similar result using digital image correlation with silver nanoparticles during a tensile test. Again, the strain concentration is in a martensite block where the Schmid factor along the in-lath plane is high. Another part is not deformed significantly even though the overall Schmid factor is high, because that slip system is out of the lath plane and the Schmid factor along the in-lath plane is low. For further improvement of the multi-layer steels, we need to improve the process. 
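For reference, the "Schmid factor in the in-lath plane" argument above uses the textbook resolved-shear-stress definition m = cos(phi)cos(lambda). A small sketch for an arbitrary slip system and loading axis; the vectors are generic bcc examples, not the measured lath orientations:

```python
import numpy as np

def schmid_factor(load_dir, slip_plane_normal, slip_dir):
    """m = cos(phi) * cos(lambda) for a uniaxial load."""
    t = np.asarray(load_dir, float); t /= np.linalg.norm(t)
    n = np.asarray(slip_plane_normal, float); n /= np.linalg.norm(n)
    d = np.asarray(slip_dir, float); d /= np.linalg.norm(d)
    assert abs(np.dot(n, d)) < 1e-8, "slip direction must lie in the slip plane"
    return abs(np.dot(t, n) * np.dot(t, d))

# bcc example: (110)[1-11] slip system loaded along [100]
print(f"m = {schmid_factor([1, 0, 0], [1, 1, 0], [1, -1, 1]):.3f}")  # ~0.41
```

Restricting the candidate systems to those whose planes contain the lath habit, as in the talk, then simply means evaluating m only over that in-lath subset.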
We are now looking at lower-pressure bonding, which makes the fabrication much easier and more efficient. In terms of components, we not only use the high-carbon martensitic steels; we are now using HCP metals such as magnesium and titanium to achieve lighter multi-layer steels. We are also using steels with high impurity contents, like scrap steels, so that we can use scrap to fabricate high-performance steels. Here is an example of a magnesium-steel multi-layer. We have developed a good bonding process to join magnesium and steel, and we fabricated a three-layer magnesium-steel multi-layer. Magnesium is the lightest commercially available metal, but the problem is its ductility: because of the HCP structure, the elongation is up to 20% or even less. By employing a multi-layer structure, we can increase the ductility of magnesium, without any breaks, up to 35-40%. The strength is also increased because of the steel. This is the summary of my talk. Thank you very much for your attention. I was wondering how you deal with the differential volume contractions and expansions that you get when you quench the multi-layers. Does that set up residual stresses between the layers that might result in crack propagation being easier and delamination becoming easier? Yes, you are right. There must be some residual stress. We are now measuring how much residual stress there is and what the effect on the mechanical properties is. Certainly, there is some residual stress. You are saying that the decrease in martensite layer thickness is increasing the toughness. Is it not due to a locally plane-strain loading condition being relaxed in only the martensitic layer? You are right; as the thickness decreases, the situation approaches that. The thickness can also be larger if the fracture toughness of the martensite layer is moderate rather than low. Today I showed the martensite of type 420, a high-carbon martensitic stainless steel, which is really brittle; the fracture toughness is really low. For a normal-carbon martensitic steel the toughness is not so low, and we can increase the thickness more. The situation is not simply one of plane strain. So far, you have made these materials by starting with the original layers and then rolling them together. Have you looked at other methods of fabricating this kind of structure? Additive layer manufacturing looks like it would be well suited to a layered structure like that, with electron beam or laser depositing powder, and then you have a lot of flexibility to pick material or thickness. The multi-layer structure is employed everywhere, for example in semiconductors; in that case, as you mentioned, they deposit layer by layer. But this is a structural steel, we need volume, so the easier and simpler the fabrication, the better. We have many possibilities, but at the moment I think this is the simplest way. What sort of thickness are you aiming for? You say it is easy to get good volumes of material. What sort of thicknesses and volumes are you interested in fabricating? In our case, this is sheet material, so the final thickness is about 0.5 to 2 mm or something like that. And the sheet is quite wide, I guess. In our research, we have already made 60 mm steels, coil to coil. So when you do the hot rolling or warm rolling of the martensite multi-layer, does the martensite phase during rolling actually change back to austenite, or is it still martensite? In hot rolling and warm rolling. Could you repeat that again? 
When you do hot rolling or warm rolling of your multi-layer steels, one layer, suppose it is martensite: has the martensite at that temperature actually changed back to austenite? I understand. During fabrication, the martensitic steel does not have the microstructure of martensite; it is a mixture of ferrite and cementite or something like that during hot rolling. After that, the heat treatment is carried out, austenitisation and then quenching, and that is where the martensite is formed. Thanks for a very interesting presentation. It reminds me of the fibre-metal laminate work that was done a few years ago, introducing polymers and metal layers together. I just wondered if you had done any corrosion studies; introducing dissimilar metals is often problematic. That is also a good question. We have done corrosion tests. There is a possibility that the corrosion resistance is decreased if you combine, say, magnesium. Did you measure the Young's modulus of the multi-layer materials? Yes, but at the moment we combine steel with steel, so the Young's modulus is the same. Yes, it means it just follows the mixture. Yes, yes, yes; this is well known. I was wondering if you did fatigue testing and strain localization testing for formability. We are conducting fatigue tests too; at the moment there is nothing I can say about that. Certainly the martensitic steel affects the fatigue behaviour. Did you make strain localization experiments to understand the viscoplastic behaviour under forming, or for forming? At the moment we have not done much, in 3D deformation. Thank you. Thank you for a nice presentation. Could you explain to me in a few words what is the difference between your idea of a composite material on the basis of steel and, for example, mokume-gane steel or Damascus steel? Yes, maybe the origin is the same. In the past many people studied such laminated steels, even Damascus too. But in the past, it looks like less attention was paid to the ductility; people tried to increase the strength, but the research on ductility is limited, as far as we have investigated. Thank you. Do you need heat treatment? Did you study how important the diffusion of carbon is between the layers of high- and low-carbon steel? Yes, it is very important to suppress the diffusion of carbon. When we combine the different steels, we always think about the activity of carbon during the heat treatment, so that maybe alloy design is needed to suppress the diffusion of carbon between the brittle layer and the ductile layer. Toshi, how do you measure the interfacial strength? We use a peel test, but we cannot measure it if the interfacial strength is really high; only a brittle interface can be measured. And how do you control it? Nambu, can you explain that? In the case of the interfacial toughness, the geometry of the test is such that the tensile direction is just 180 degrees, so the geometry and the stress direction are almost constant and we can evaluate the difference. But supposing that the interfacial strength is too low, how can I change it? Of course, in the case of very, very weak interfacial toughness it is very difficult to evaluate, but in the case of rolling at, for example, 500 degrees C or 600 degrees C, we can evaluate the interfacial toughness. So the interfacial toughness is increased by the bonding process and the heat treatment; you can increase the interfacial toughness. And I would like to note that the interfacial toughness cannot really be measured by a peel test when it is very high. 
Based on that, would it be possible, if you did have quite a high stress at the interface, to design the ductile phase so that it underwent slight plastic relaxation at the interface, which would provide a bit of work hardening? Would that be beneficial as a way to reduce any stresses that might be generated in the alloy? Would that be feasible, or would the work hardening limit your layer thickness and the further work-hardening capability, and would that then have an adverse effect on ductility, do you think? So: with the residual stresses that are set up at the interface, if they were reasonably high, would it be feasible to design your ductile layer to have a yield strength such that it undergoes slight plastic deformation to relieve the residual stresses? And would that work hardening then have an adverse effect on the ductility as a result? Well, up to now we have not found any adverse effect of that. Yes, let me be a bit adventurous here, and a little bit speculative as well. Perhaps it could decrease the carbon diffusion between layers and perhaps improve the cohesion. If you dealt with, let us say, nanostructured steel, you would get strength out of a nanostructured steel layer and keep your soft layer as well. This is highly speculative, you see. Would it be adequate in all of these cases to do that? What are your ideas about that? You mean the diffusion across the interface. We are now researching that, and we want to minimize the diffusion layer to increase the strength and ductility of both the brittle layer and the ductile layer. The diffusion of carbon decreases both the strength and the ductility; that is why I am saying low-temperature bonding is necessary to improve the performance of multi-layer steel. We have a feeling that the bonding of the interface is possible at medium or lower temperatures, and that we do not need much diffusion to increase the interfacial strength. Did you look at interfaces at higher magnification, and in general, how important is the quality of the interfaces? Do you have rough interfaces or more of an intermixed kind? We are looking at the interfaces using transmission electron microscopy. Of course there are some small voids and some discontinuities, and some parts are continuous. The details of the development of the interfaces need to be researched more. I have a more macroscopic question. You are reducing the thickness of the martensitic layer to increase elongation; this is traditional dual-phase philosophy. You are also saying that strength follows a rule of mixtures. Would you not then expect the strength to drop? So you sacrifice strength to increase elongation, which is exactly what happens in any dual-phase microstructure. This is a combination; we cannot go beyond the rule of mixtures. There is some compensation, but we can use a higher-strength steel, like a very-high-carbon steel, so that we can increase the strength. And if we make every effort, we can achieve the rule of mixtures even for the elongation. Let me explain more: anyway, this is a mixture of high-strength layers and low-strength layers, and a mixture of high ductility and low ductility, and we can go further this way by using higher-strength steel. Thank you very much indeed. It has been an excellent talk and an excellent discussion.
A lecture given by Toshihiko Koseki, at the Adventures in the Physical Metallurgy of Steels (APMS) conference held in Cambridge University. Multilayered steels are described, including the theoretical framework for the design of such composites. Traditionally, physical metallurgy concerns microstructure-property correlation. In this approach, microstructure evolves as the product of interactions between composition and process parameters controlled by the thermodynamic and kinetic conditions. Attributes concerning the property are obtained as a function of volume fraction, size, shape and distribution of the constituent phases, usually described through empirical relations or even on the basis of imprecise knowledge. Hence, the approach is more evolutionary than constructive. Performance-driven construction of the microstructure demands precise response and interaction of microstructural constituents under the given loading condition. An architecturally designed microstructure implies planning, design and construction of the microstructure considering the nature, size, morphology and distribution of the constituent phases on a suitably conceived topological framework. With the aforesaid ambition, an attempt has been made at the construction of the ferrite-martensite microstructure, based on an iso-strain architecture, aiming at maximum work hardening. In another attempt, the mechanical response of a topologically designed bimodal microstructure in a single-phase steel has been evaluated for maximizing the strength-ductility combination.
10.5446/18625 (DOI)
Okay, so our next speaker is from Seoul National University: it's Professor Heung Nam Han, and he's going to be talking to us about pop-in behaviour during nanoindentation. So, over to you. Okay, thank you, Chairman. It is my honour to give a presentation at the APMS workshop in Cambridge. Thank you very much to Professor Harry Bhadeshia and also the APMS team. Okay, the title of the presentation is pop-in behaviour during nanoindentation on steel alloys. My name is Heung Nam Han and I work at Seoul National University. This work was supported by the Korean government and the POSCO steel company in Korea. The work is a wide collaboration: Oak Ridge National Laboratory in the US, KIMS, a Korean government research laboratory, Seoul National University, Hanbat University, and POSCO. When the mechanical properties at small scales are measured, normally we use the nanoindentation technique. By using AFM or SPM we can recognize the appropriate position on the material, for example a specific phase or a specific grain in a steel, and then we can obtain this kind of load-displacement curve. Is there a pointer? This one? Okay, so, like this. By using this load-displacement curve, we can obtain the intrinsic mechanical response of a specific phase or a specific grain. This is a kind of fingerprint of the material, the mechanical response of the material. By precise analysis of this kind of load-displacement curve, we can obtain various pieces of information on the mechanical response of the material, I think. In nanoindentation analysis, we must consider two special phenomena. The first one is the indentation size effect, and the other one is the probability effect of dislocation nucleation or dislocation source activation. The indentation size effect refers to the increase in nano-hardness data with decreasing indentation depth. It is well known that the indentation size effect is caused by the geometrically necessary dislocations underneath the indenter. As for the probability effect of dislocation nucleation or dislocation source activation during nanoindentation, as shown in this figure: if the indentation size is smaller than the average spacing of dislocations, there is a very low probability that the volume underneath the indenter contains an existing dislocation. In this case, the start of plastic deformation is governed by dislocation nucleation, not dislocation source activation or dislocation multiplication. But in the large-indent case, the plastic deformation is normally governed by dislocation source activation or dislocation multiplication. Okay. So, in this presentation, I will talk about the special pop-in behaviour in nanoindentation. This is a normal loading and unloading curve during nanoindentation. In some cases, a very sudden displacement excursion occurs during the nanoindentation; this is called a pop-in. The pop-in is a kind of softening process: it is related to geometrical softening or material softening of the material. I would like to talk about that. As you know, strain-induced martensitic transformation is one of the geometrical softening processes. In this case a large shear deformation occurs, a massive atomic movement. 
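An aside on the indentation size effect just described: the geometrically-necessary-dislocation picture is usually quantified with the Nix-Gao relation H = H0*sqrt(1 + h*/h), which captures the rise of hardness at shallow depths. A sketch with illustrative constants; H0 and h* are material fits, assumed here:

```python
import math

def nix_gao_hardness(h_nm, H0=3.0, h_star_nm=150.0):
    """Nix-Gao indentation size effect: H = H0*sqrt(1 + h*/h).
    H0 (GPa) is the macroscopic hardness and h* a material/tip
    length scale; both are illustrative numbers here."""
    return H0 * math.sqrt(1.0 + h_star_nm / h_nm)

for h in (25, 50, 100, 500, 2000):  # indentation depth in nm
    print(f"h = {h:5d} nm -> H ~ {nix_gao_hardness(h):.2f} GPa")
```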
This can cause geometrical softening. Also, in the epsilon martensitic transformation, massive partial dislocation movement occurs; this is another geometrical softening event. And I will also talk about the yield drop in ferritic steel, which is one of the large geometrical softening processes. So I would like to talk about the relationship between the nanoindentation pop-in and the yield drop in the macroscopic tensile test. Okay. The most popular case of nanoindentation pop-in is incipient plasticity. As you can see, in nanoindentation the plastic deformation proceeds by first dislocation nucleation, then dislocation source activation and dislocation multiplication. The load for dislocation nucleation is normally larger than that for dislocation source activation and dislocation multiplication. So after the start of plastic deformation, a kind of geometrical softening event occurs, and this can cause a nanoindentation pop-in like this. Okay. In this presentation, I would like to talk about the other possible sources of nanoindentation pop-ins in steel. As I mentioned above, I will deal with the mechanically induced alpha-prime martensite transformation and the mechanically induced epsilon martensite transformation, and for the last part I will talk about the relationship between the yield drop and the nanoindentation pop-in. Okay. First, I will talk about strain-induced alpha-prime martensite transformation. We used this kind of material with a high manganese content. After the appropriate heat treatment, we can obtain this kind of microstructure, and we carried out a combination of the EBSD technique and the nanoindentation technique, so we can carry out the nanoindentation on each austenite grain. Then we obtain this kind of load-displacement curve, like this. You can see a pop-in here, another pop-in here, and another pop-in here. I would like to know the origin of this kind of pop-in. Let us indicate the Hertzian elastic solution. From the two curves, the initial pop-in is caused by the elastic-plastic transition. From the Hertzian solution we can obtain the maximum shear stress underneath the indenter; this value was calculated to be 9 gigapascals, which corresponds to about the shear modulus over 8. This is very close to the theoretical strength for dislocation nucleation. Also, under consideration of the indentation probability effect: for a normally annealed specimen the mean distance between dislocations is about 10 micrometres, but in this case the indenter radius is just 0.2 micrometres, the pop-in start depth is 20 nanometres, and the size of the austenite grains is just 1 micrometre. From those data we can conclude that the first pop-in event is likely induced by dislocation nucleation. But how about the second and third pop-ins? This is a metastable austenite phase, so these two pop-ins may be related to the martensite transformation. Okay? So, how do I check that? This is an initially austenitic phase; you can see, after the nanoindentation, the very clear indentation mark here. Then, by using the focused ion beam, we prepared this kind of TEM specimen and observed the TEM microstructure like this. Underneath the indenter we observe the hard martensite phase, and we also observe that the gamma austenite phase remains. 
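The Hertzian estimate quoted above can be checked with the standard spherical-contact formulas: the maximum contact pressure is p0 = (6*P*E*^2 / (pi^3*R^2))^(1/3) and the maximum shear stress beneath the indenter is about 0.31*p0 for Poisson's ratio near 0.3. The 0.2 micrometre tip radius is from the talk; the pop-in load and the elastic constants below are placeholder assumptions:

```python
import math

def reduced_modulus(E1, nu1, E2, nu2):
    """Hertzian reduced modulus of the contact pair."""
    return 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)

def hertz_max_shear(P, R, E_star, k=0.31):
    """Spherical contact: p0 = (6 P E*^2 / (pi^3 R^2))^(1/3);
    tau_max ~ k * p0 below the contact centre (k ~ 0.31 for nu ~ 0.3)."""
    p0 = (6.0 * P * E_star**2 / (math.pi**3 * R**2)) ** (1.0 / 3.0)
    return k * p0

E_star = reduced_modulus(200e9, 0.3, 1141e9, 0.07)   # steel / diamond tip
tau = hertz_max_shear(P=150e-6, R=0.2e-6, E_star=E_star)  # assumed 150 uN pop-in load
print(f"tau_max ~ {tau / 1e9:.1f} GPa")  # ~9 GPa, the order quoted in the talk
```

With a shear modulus of roughly 70-80 GPa for austenite, 9 GPa is indeed of order G/8, the theoretical strength scale for homogeneous dislocation nucleation.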
So, the two phases are in the Kurdjumov–Sachs orientation relationship. That means this alpha prime martensite was transformed from the gamma austenite phase, I think. So I confirmed that the martensite phase, after the transformation, was initially the austenite phase. As you know, martensite is a harder phase than austenite, so this is a hardening process, not a softening process, from the point of view of the mechanical property data alone. Okay? But this is the Bain deformation schematic diagram. You can see the Bain deformation has one compressive axis and two tensile axes. According to the compressive axis, there are three Bain variants, like this. So, if the applied stress is parallel to this direction, like this, then Bain variant selection favours this variant and not the others. Then a large permanent deformation, a large compressive strain, is developed in the material. This can cause geometrical softening. This is a very simple assumption. Okay? So I think the special variant selection can cause geometrical softening, and this can be a cause of nanoindentation pop-in. From this very rough assumption we can easily calculate the pop-in depth due to the martensite formation from the TEM microstructure: we obtain 25 nanometers. This is very close to the 20 nanometers from the indentation measurement. Okay? But this is a very rough calculation. As you know, in the normal martensite transformation case, we must consider the lattice-invariant shear deformation as well as the Bain deformation. So we must consider the 24 K–S variants, or the N–W variants, or something like that. Okay? So, to evaluate the precise nanoindentation pop-in depth, we must consider this kind of total deformation tensor, consisting of the Bain deformation tensor and the lattice-invariant shear deformation tensor in the BCC crystal coordinate system. After that, we can calculate the transformation strain tensor like this for a single austenite grain. And we also need an appropriate variant selection model, considering the interaction energy between the applied stress and the transformation strain. Then, from this numerical or mathematical approach, we can evaluate the pop-in depth precisely — a sketch of the interaction-energy argument, for the simple Bain case, is given below. Okay? Okay. I'd like to show you another interesting pop-in data set. This is another austenite grain in the same material. After the nanoindentation, we obtained a multiple pop-in event. So, I'd like to know the microstructure changes after the nanoindentation. Again by using the focused ion beam and TEM technique, we observed this kind of microstructure. And here, by using the automated TEM orientation mapping technique ASTAR, we can obtain the phase map and orientation maps like this. This is the indentation point here. You can see the remaining austenite phase, and also you can see various martensite variants with different orientations. So, I'd like to check the origin of this kind of multiple variants. We carried out crystal plasticity FEM for a single austenite grain, and we obtained the stress state underneath the indentation, a very complex stress field. Then, by using the Wechsler–Lieberman–Read theory, we can determine the favourable variant selection. After that, we found that just four variants match the theoretical data. But the interesting thing is that the alpha-1 position is here. This position is just underneath the indentation. I think the first martensite transformation occurred in this one.
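To make the rough variant-selection argument concrete, here is a minimal sketch, assuming illustrative lattice parameters and an idealised uniaxial compressive stress under the indenter; the talk's actual analysis used the full crystal-plasticity FEM stress field and the complete transformation strain, including the lattice-invariant shear, so this is only the simple Bain-strain version of the argument. It builds the three Bain variants and ranks them by a Patel–Cohen-type interaction work.

```python
import numpy as np

# Assumed lattice parameters (illustrative): austenite and martensite.
a_fcc, a_bcc = 0.3571e-9, 0.2866e-9

# Bain strain: one axis compressed (a_fcc -> a_bcc), the other two
# expanded (a_fcc/sqrt(2) -> a_bcc): roughly -20% / +13% principal strains,
# i.e. one compressive axis and two tensile axes, as in the talk.
eta_c = a_bcc / a_fcc - 1.0                 # compressive axis
eta_t = np.sqrt(2) * a_bcc / a_fcc - 1.0    # two tensile axes

def bain_variant(axis):
    """Bain strain tensor with the compression axis along `axis` (0,1,2)."""
    e = np.diag([eta_t, eta_t, eta_t])
    e[axis, axis] = eta_c
    return e

# Applied stress under the indenter, crudely idealised here as uniaxial
# compression along z (the real field is multiaxial).
sigma = np.zeros((3, 3))
sigma[2, 2] = -1.0e9   # Pa

# Interaction work per unit volume, U = sigma : eps.
# The variant that maximises U is the one favoured by the applied stress.
for v in range(3):
    U = np.tensordot(sigma, bain_variant(v), axes=2)
    print(f"variant {v}: U = {U/1e6:+.1f} MJ/m^3")
```

As expected, the variant whose compression axis lies along the applied compression does positive work (is favoured), while the other two are penalised; under the complex indentation stress field the same ranking is done variant by variant over all 24 K–S transformation strains.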
And this position perfectly matches the calculation data. But this martensite forms first, and this martensite is a hard phase, so it then acts as a hard indenter itself. So this can change the stress field in the austenite grain, and this can make very complex martensite variants compared with the calculation, I think. So these very different martensite variants can make the multiple pop-ins during the nanoindentation. Okay, let's move on to the austenite-to-epsilon-martensite transformation. I used this kind of material, with high nitrogen. The stacking fault energy of the austenite is about 15 millijoules per meter squared. For a steel with this stacking fault energy, it is known that epsilon-martensite formation occurs at the initial stage of deformation. Epsilon martensite is made by stacking faults on the {111} plane every two layers. Thermal epsilon martensite has a self-accommodated stacking, so after the epsilon-martensite formation there is no macroscopic strain. But strain-induced epsilon martensite has a single-variant partial stacking, and this can cause a large shear deformation. So I think this large shear deformation can make a geometrical softening, and this can make the small pop-ins during the nanoindentation. I'd like to check this. Okay, before the nanoindentation, we carried out the tensile test like this. After just 5% tensile deformation, we observed epsilon-martensite formation in the austenite with the expected orientation relationship. In the 10% case, also, we can observe the epsilon martensite. But at 40% deformation we observe the alpha prime martensite and the corresponding orientation relationships. Okay, this is the typical example of the load-displacement curve from the nanoindentation of this grain. You can see very small pop-ins occur at the initial stage of deformation. So, I'd like to check whether this pop-in is related to the epsilon-martensite formation or not. By using the focused ion beam and also TEM, we'd like to check the origin of this kind of pop-in event. Just underneath the indenter, we observed alpha prime martensite in the TEM. After the nanoindentation, the part just underneath the indentation has undergone a large deformation, so I think the alpha prime martensite occurs there. So a region slightly outside the large-deformation zone was examined. There we obtained a very fine banded structure, and by using high-resolution TEM we confirmed the epsilon-martensite formation, over twelve stacking faults. Okay, then, from the analysis of the twelve variants — the variant selection of the twelve partial slip systems — we can calculate the unit displacement for one stacking fault: we obtained 0.08 nanometers of deformation along the indentation direction. So we can consider that over twelve epsilon-martensite stacking faults can make a 2 or 3 nanometer nanoindentation pop-in. This is very reasonable compared to the experimental data. So I think the initial-stage pop-ins may be related to the epsilon-martensite formation, I think. Okay. So, this is the last part of the presentation. In normal BCC steels, after the nanoindentation we obtained quite large pop-ins. So, I'd like to check the relationship between the nanoindentation pop-in behavior and the sharp yield drop of the material. As you know, the yield drop is one of the geometrical softening events, the breakaway of dislocations from Cottrell atmospheres.
Okay, so we used this kind of ferritic steel containing carbon and nitrogen, and we carried out the nanoindentation like this, over 100 nanoindentation data. Before the nanoindentation, we carried out the tensile test like this. You can see the obvious yield drop here. Just after 6% strain, when reloaded right after unloading, the yield drop disappeared. Okay, but after 30 hours of strain aging, the yield drop recovers. This is a very fundamental strain aging effect, but I'd like to check this: if the pop-in in ferrite is related to the yield drop, an analogous phenomenon must exist in the case of nanoindentation. I'd like to check. Oh, sorry. Okay, we carried out nanoindentation of the pre-strained material and the strain-aged material at room temperature. As you can see, right after pre-straining the pop-in disappeared. After 30 hours, and after 3 weeks, the pop-in reappeared, and large pop-ins appeared more frequently. That means the probability of pop-in increases with strain-aging time. From this, the nanoindentation pop-in is very closely related to the macro-scale yield drop phenomenon. Okay, thank you very much for your time. Thank you very much. Thank you very much, Professor Han. Are there any questions from the floor at all? We've sometimes seen this pop-in occur in other metals: when you hit the austenite, you see the pop-in behavior. So we always thought that it could be because of both dislocation activity and transformation. How would you differentiate these two? Did you have to go through this FIB analysis to confirm that, or how would you do that? So, we have both phenomena, the slip phenomenon and also the transformation behavior. Yeah — so, you know, for normal plastic deformation by dislocation phenomena in the nanoindentation case, the first event is dislocation nucleation, and next dislocation source activation and dislocation multiplication, and the load for dislocation multiplication or source activation is much smaller than for dislocation nucleation. So the first initiation of plastic deformation is related to the dislocation pop-in. But by the second pop-in or third pop-in, plastic deformation has already occurred, which means there are already many dislocations underneath the indentation. So we must think of a different pop-in source. Okay. Right, right. Thank you for this interesting presentation. Did you try to correlate the values of the force-displacement curves, the pop-ins, with the actual energies of the events that you mentioned, like nucleation, glide and so on? Because this could be very useful input for crystal plasticity models. Yeah, so actually, in my laboratory I did the development of a CPFEM code, and the starting point, the motivation, of this nanoindentation work was to obtain the intrinsic mechanical property. But we must consider the very complex mechanical behavior — I mean the indentation size effect, that means the geometrically necessary dislocations, and also the pop-in behavior — so it is very complex at this moment. So I want to obtain appropriate input data for CPFEM, but at this moment it's not easy to evaluate these kinds of material parameters. Thank you. This is future work. A question from Ji Hong Kang.
Since the area under the force-displacement curve can be considered as a work or energy, do you think it would be possible to quantitatively determine the different types of transformation energy from this pop-in effect? Would you read it again, please? So: since the area under the force-displacement curve can be considered as a work or energy, do you think that it would be possible to quantitatively determine the different types of transformation energy from this pop-in effect? I don't know exactly the transformation energy. Anyway, for the variant selection we must consider the mechanical interaction energy between the external stress and the transformation strain. And I think, for each variant, the chemical free energy change from austenite to martensite is the same, so I don't fully understand — but anyway, we are supposed to have dinner with Ji Hong Kang, so I'll talk about that then. Yeah, she has another question, about recrystallization. She asked: do you see any recrystallization during nanoindentation? Recrystallization? Recrystallization is a kind of thermally activated process, except for dynamic recrystallization; but during the nanoindentation I cannot induce dynamic recrystallization, so it's not easy to obtain that. Okay, thank you. Thank you. I wanted to get at the relationship between discontinuous yielding in tension and the pop-in in nanoindentation. Yeah. So, we have steels that have continuous yielding and discontinuous yielding, and then discontinuous yielding with variations in the amount in a macroscopic tension test. How good is the correspondence between what you would measure in nanoindentation and what you would predict in tension when you look at that spectrum of yield point elongations? Could you take a very small specimen of a macroscopic material and predict and understand its yield point elongation? Yeah, so that's a good question. I'd like to do that, and I tried it, but nanoindentation is just one-point data, while the tension test is macroscopic data, so it's not easy to correlate the two mechanical properties. The only thing is that the relationship between them is valid; but matching them quantitatively is not easy, I think, at this moment. I'm sorry. Final question. In fact, I have exactly the same question, because we always try to use the Vickers hardness to relate to the yield strength, and of course through the nanoindenter we also try to get some data — and for the nano-precipitation in ferrite we also try to get some information in order to find out the deformation behavior. But in your case you have a uniform structure, compared with steels containing nano-carbides in a ferrite structure. So maybe we can relate the nanoindentation hardness data to the yield strength. Do you have any idea how to convert to the yield strength, basically through a mathematical calculation? Yeah — at this moment, no. But normally, advanced high-strength steels contain a multiphase structure with various phases, so by using nanoindentation we can measure the intrinsic property of each phase, and we can compare the properties. That is reasonable, I think; but between the macroscopic value and the nano value there is quite a big difference at this moment. Still, we can compare the mechanical properties of two-phase or three-phase structures at the nano scale. Okay, I think we're going to have to leave it there. If we could just thank our speaker again.
A lecture given by Heung Nam Han, at the Adventures in the Physical Metallurgy of Steels (APMS) conference held in Cambridge University. About the nanoindentation of steel, the pop-in effect, and variant selection. Nano-indentation is an outstanding method to probe small-scale mechanical properties, which are relevant to a wide range of materials and applications. The response of a material to nano-indentation is usually presented in the form of a load–displacement curve. It is known that nano-indentation pop-in is a sudden displacement excursion on the load–displacement curve during load-controlled indentation. The sources of pop-in might be basically geometrical softening behaviors. In this study, several physical events which cause pop-ins during nano-indentation of steel alloys will be discussed. First, we consider the onset of plasticity resulting from dislocation nucleation or dislocation source activation in ferritic steel, which can produce the geometrical softening in the early stage of mechanical contact during nano-indentation. The effect of strain aging on the nano-indentation pop-in is observed and compared to the well-known macro-scale yield drop in the tensile test. Second, both strain-induced alpha prime and epsilon martensitic transformations of metastable austenite are investigated by nano-indentation of individual austenite grains in multi-phase steels. The pop-ins are described as resulting from the geometrical softening due to the selection of favorable variants of alpha prime martensite and of partial dislocations for epsilon martensite, respectively.
10.5446/18607 (DOI)
The next talk is by Professor J. R. Yang. He's a leader of steel research in Taiwan as a whole and really does an enormous amount of work on steel, both through the CBMM Center that he has created over there and in the whole of the Department of Materials, which he led for many, many years. So, yeah. Ladies and gentlemen, first of all I would like to thank Harry and the team of APMS for such a wonderful organization, so that I can meet you here. And Harry gave me this topic, and I feel very satisfied, because it's good to understand a very, very special phenomenon. In fact, secondary hardening commonly happens in many kinds of steels containing strong carbide-forming elements like chromium, vanadium, niobium and molybdenum. And here I would like to report to you, and emphasize, that secondary hardening also happens in low-carbon bainitic steel. So let's look at how to produce low-carbon bainitic steel: through careful chemical alloying additions, and also controlled rolling and accelerated cooling, we can easily get high quantities of bainite in low-carbon steel. Ideally, the low-carbon bainitic steel provides high toughness and high strength and, of course, good weldability. So we expect this steel to be quite useful for automobile applications, and that's my subject, because we have a cooperation with the CBMM company, and CBMM always pushes us to do research related to steel applications for automobiles. And here I would like to show you the typical microstructure of low-carbon bainitic steel from our research results. In fact, people always say it's granular bainite, and when we check Harry's book we know Harry has written a beautiful review of granular bainite, and from that I knew that there's a paper published in 1950 by Habraken. Is that right? Yes sir. And he said the coarse ferrite phase looks granular. But I would like to raise a question here: can this terminology, granular bainite, signify the exact structure? Can anybody answer me? Can anybody answer me? Of course, we need TEM to clarify the detailed microstructure, so I will give you the answer now. So we should reconsider the terminology of granular bainite. It may be wrong, but in this talk I will keep this terminology, because it's quite convenient for communicating with people. So here we can see the sub-units of bainitic ferrite here, with a thickness of about 200 nanometers — so it's very thin — and also the sub-units have the same orientation, so the coarse ferrite plate is in fact composed of ferrite sub-units, and because the sub-units have the same orientation we couldn't distinguish the detail. And in my research we would like to develop the low-carbon bainitic steel, and China Steel has prepared three steels for us. For these three steels we have the same base composition; I would like to show it here. The base composition is 0.05 carbon, 1.7 manganese, and 0.08 niobium. For steel one, without molybdenum, we use the label Nb steel. For steel two, with 0.1 molybdenum, we use the label Nb-Mo steel. For the third steel we have 0.3 percent molybdenum, and we label this steel the Nb-3Mo steel.
China Steel prepared these three steels for us and cast them into ingots. After homogenization at 1200 degrees C for two hours, the ingots were hot rolled, with a controlled reduction of about 20 percent per pass and a finish rolling temperature around 900 degrees C, and here we have a 5 millimeter thick strip, so the total reduction is quite high, and we can imagine that grain refinement also happens here. After finish rolling, the strips were treated by accelerated cooling to 650 degrees C, or 550 degrees C, or 450 degrees C, for 10 minutes, for the bainite transformation. So here we have seven strips at room temperature. I would like to emphasize this, because we will look at the microstructural level: Nb-650 — that is from the isothermal transformation at 650 degrees C — and Nb-Mo-650; and also we have Nb-550, Nb-Mo-550, and Nb-450, Nb-Mo-450 and Nb-3Mo-450. We have seven strips. These strips were reheated at 600 degrees C for different time intervals, from 0.5 to 8 hours, to investigate whether secondary hardening happens. I think the quantitative metallographic data are very important. We need those, because we would like to correlate them to the properties. Here I would like to show you the typical structure of the so-called granular bainite. We can see this is M/A — it's martensite/austenite — and it's quite easy to distinguish the M/A from the degenerate pearlite by the morphology. And here, look at the granular structure: this is granular bainite, and here is ferrite. In fact, it's very difficult to distinguish between the granular bainite and allotriomorphic ferrite. So we tried our every effort to distinguish them. First, we can use Vickers hardness to distinguish, and second, we tried the EBSD technique. And this is our result here. For this SEM of the Nb-450 strip, we have 25 volume percent allotriomorphic ferrite, 63 percent granular bainite, 9 percent martensite/austenite phase, and 3 percent degenerate pearlite. And indeed it's very, very important to have this quantitative data in order to understand the mechanical behavior. Here I would like to show you the EBSD technique, to understand how to use it to distinguish ferrite and bainite. Here is an example showing the misorientation measurement between the sub-units in granular bainite. This is sub-grain A, and we can get a Kikuchi pattern here; this is sub-grain B, and we can get a Kikuchi pattern here. In fact, through the Euler angles we can easily get the misorientation angle. But, precisely, we try to describe the misorientation by an axis-angle pair. So, from sub-grain A we have the Euler angles here, and from sub-grain B we have the Euler angles here; but in fact we have a reference crystal, so we can get the orientation matrix of sub-grain A relative to the reference crystal, and also of sub-grain B relative to the reference crystal — so we have these two basic orientation relationships. And then we can get the misorientation matrix between sub-grain A and sub-grain B. So here, I would like to point out, the commercial software always shows us this minimum angle. Why? Because we have 24 axis-angle pairs, and the commercial software always chooses the lowest angle. So when we use this software we must consider what the software engineers have done for us, because we should understand the crystallographic meaning. In fact, using the minimum angle is quite easy to interpret, but we should understand the physical meaning — a small sketch of this minimum-angle computation follows.
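The minimum-angle (disorientation) computation just described is compact enough to sketch in full. This is a generic illustration of the cubic calculation, not the speaker's or any vendor's actual code: it enumerates the 24 proper rotations of the cube and takes the smallest rotation angle over the symmetry-equivalent misorientations.

```python
import numpy as np
from itertools import permutations, product

def cubic_symmetry_ops():
    """The 24 proper rotations of the cube: signed permutation
    matrices with determinant +1."""
    ops = []
    for perm in permutations(range(3)):
        for signs in product((1.0, -1.0), repeat=3):
            M = np.zeros((3, 3))
            for row in range(3):
                M[row, perm[row]] = signs[row]
            if np.isclose(np.linalg.det(M), 1.0):
                ops.append(M)
    return ops

SYM = cubic_symmetry_ops()   # 24 operators

def misorientation_angle(gA, gB):
    """Minimum (disorientation) angle in degrees between two cubic
    orientations given as crystal<-sample rotation matrices."""
    dg = gB @ gA.T
    cosines = [(np.trace(S @ dg) - 1.0) / 2.0 for S in SYM]
    return np.degrees(np.arccos(np.clip(max(cosines), -1.0, 1.0)))

def axis_angle_matrix(axis, deg):
    """Rodrigues' formula, used here only to build test orientations."""
    a = np.asarray(axis, float)
    a /= np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    t = np.radians(deg)
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

# Example: a 60 degree rotation about [111], the twin-type relation that
# shows up as the ~60 degree peak in bainite misorientation histograms.
print(misorientation_angle(np.eye(3), axis_angle_matrix([1, 1, 1], 60.0)))
```

Running the example prints 60.0, which is also why the bainite histograms in the next part show a peak near 60 degrees: many bainitic variant pairs are related by rotations close to 60 degrees about <111>.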
Let me show you two examples. For the bainite, the distribution of misorientation angles looks something like this: we have two peaks here, this one at low angle and this one at around 60 degrees — maybe somebody understands why it is located at 60 degrees. And for the ferrite, we can see the misorientation angles are always high angles. So from the intensities here it is very easy to distinguish the bainite and ferrite regions. Based on this concept and technique, we quantitatively estimated the data for the strips, the Nb-450 strip and the Nb-Mo-450 strip. Here we see that, by adding 0.1 molybdenum, we get a higher hardenability for bainite: the volume fraction of bainite increases from 63 volume percent to 66 percent, and the ferrite decreases from 25 volume percent to 14 percent. So the molybdenum addition effect is quite significant. On the other hand, we can check the Nb-550 strip and the Nb-Mo-550 strip: by adding 0.1 molybdenum, zero volume percent bainite increases to 62 percent bainite. The Nb-550 strip contains 87 volume percent ferrite, and here there is 16 volume percent ferrite, so the structures are completely different. It's a very important thing: when we study the mechanical properties, we always wish to get the quantitative metallographic data in order to understand the mechanical behavior, so that we can control both the microstructure and the mechanical behavior. Here I would like to show you the stress-strain curves for the Nb-Mo-550 strip and for the Nb-Mo-450 strip. Because the structures are very similar — granular bainite 62 and 66 percent, allotriomorphic ferrite 16 and 14 percent, M/A 21 and 19 percent, and 1 percent degenerate pearlite each — you can see the curves look very similar. And for these two strips, Nb-550 and Nb-450: the Nb-550 contains zero bainite, and this curve shows a sharp yield point here, because there is 87 percent ferrite in this steel; as for Nb-450, we can see there is a slight plateau here, because this steel contains 25 percent allotriomorphic ferrite. Let's compare Nb-550 and Nb-Mo-550 after tempering for different time intervals. The Nb-550 strip contains zero granular bainite, so no secondary hardening can be detected. However, for Nb-Mo-550 we can see that after one hour the hardness rises up quickly, and there is a very strong peak after two hours of tempering. Here we also show Nb-Mo-450 and Nb-450, containing 66 percent and 63 percent granular bainite, and we can detect the secondary hardening after one hour of tempering at 600 degrees C. And let's look at these two stress-strain curves here, okay? This curve shows the strip without granular bainite, and this curve the strip containing 63 percent granular bainite, and after 10 minutes or one hour of tempering we expected this curve to rise up — I can show you quickly here. So it's quite significant: the secondary hardening occurred in the bainitic ferrite, and the microstructural development seems very important. So I would like to show you the data quickly — sorry, okay, something is wrong, these two we couldn't show, I don't know the reason. But here I would like to show you, from the EBSD, that the peak is quite significant for the Nb-3Mo steel, because of the higher molybdenum level. And also we studied the dislocation density for Nb-450, Nb-Mo-450 and Nb-3Mo-450, and after tempering for one hour and eight hours we can see that it basically doesn't change, which means the dislocation structure is
quite stable for these alloys. And also we used high-resolution TEM to observe the carbides, and we can see the carbides always precipitate on the dislocations. They are very, very tiny, and the carbides always have the MC structure, with the Baker–Nutting orientation relationship. Also for the Nb-Mo steel we can see the MC carbide and the Baker–Nutting orientation relationship. And here we show the dark-field image: because the carbides are very tiny, we would like to observe the carbide density, and we tried to use the traditional dark-field method to observe the density. In fact the carbides are very, very tiny, and from the high-resolution TEM we measured the size of the carbides and the number of carbides — we measured about 300 particles — and we can see the carbide size stays very, very small for the Nb and Nb-Mo steels. And from the chemical analysis we also would like to observe the chemistry of the carbides, and here is the data: for tempering at 600 degrees C for eight hours, the Nb over Mo atomic ratio is about 1.8, for example, and for the Nb-3Mo-450 the atomic ratio is about one over one. And finally, I would like to show you the data quickly, and here it is quite significant: we can have the secondary hardening after a short tempering time. In fact, for industrial application we should consider the yield strength and tensile strength and also the elongation. After one hour of tempering, you can see the yield strength and tensile strength increase, and the elongation also increases. So it's very good to use this material for industrial application, because if we choose the proper processing, and after bainite forming we heat it just for a short time, the strength and elongation both increase. Finally, I would like to give a brief conclusion. In our low-carbon Nb-containing bainitic steel, we find the molybdenum addition has the advantage of producing a high volume fraction of granular bainite, and it's good for secondary hardening during tempering. I think it's very important for industrial application, especially considering low-carbon, low-alloy steel. Thank you for your attention. Excellent lecture, thank you. What is remarkable is the stability — yes — of the structure — yes — and even with tempering at 600 you maintain the strength, the dislocation density — yes — and increase ductility — yes, yes. So the idea would be that the 600 degrees C happens during coiling? Yes, exactly. You have a lot of M/A also, so are the carbides forming in the granular bainite or in those M/A? How would you differentiate this? Okay, it's very easy, because in the bainite region we can see the sub-units, as I showed you — the sub-grains — and it's completely different from the M/A phase as a structure. So even after tempering also? Even after tempering also. Because tempered martensite would look similar also? Okay — tempered martensite always gives you coarse carbides; we can discuss later. Why did you use, for example, niobium and molybdenum, and what about vanadium, which is known to be very effective for hardening, for example in high-speed steels? Okay. And the second question is: why did you use, for example, only tempering at the temperature of 600 degrees? Okay, thank you very much. Yes — for this alloy design, in fact, we should have a high volume fraction of bainite. So how to get a high volume fraction of bainite? I think the niobium and molybdenum play an important role in the hardenability of bainite. So in this work we added niobium at the level of 0.08 percent, and that's the reason we used molybdenum
instead of vanadium, because we would like to increase the hardenability of the bainite. And your second question — okay, in fact, because of the limited time I couldn't show my results for 500 degrees C and 550 and 650, but I chose the best one, 600. We have a question from David Haudhig. He asks, in the experimental procedure: you refer to the accelerated cooling — is this the accelerated cooling of a five millimeter thick strip, or a commercial simulation? After the isothermal transformation at, say, 450 degrees C, the strip cools naturally to room temperature. We suppose the bainite is already there, so we don't need to use fast cooling at the final stage. Is that okay? And another question, from Suje: you say something about niobium increasing the hardenability — yes — and you form granular bainite — yes — and how do you separate the ferrite and granular bainite? That's the question. Okay: because it's a low-carbon steel, it's very difficult to get a completely bainitic structure, so we always have some ferrite. That's the reason we need the accelerated cooling, and we also need the micro-alloying addition, so those parameters are very important for us; we tried hard to prepare this material. But the question is how do you distinguish the ferrite from the bainite — I think you answered it in your presentation. This way is very easy: just by hardness, because by hardness it's very easy, and also through EBSD we can distinguish. Someone should comment on the fact that niobium and molybdenum dissolve in the same carbide, whereas we know that doesn't happen with titanium and molybdenum. Do we know whether molybdenum should be soluble in niobium carbide? Yes, I know it, because we used, for example, niobium in high-speed steels, and complex carbides are formed which contain, for example, both elements. When we did the first-principles calculation on the niobium carbide with molybdenum, energetically it is not favorable for the niobium carbide to contain the molybdenum. In the titanium-molybdenum case, we found that some addition of molybdenum to the titanium carbide will relax the interfacial strain energy between the ferrite and the titanium carbide, but it is not clear that the same mechanism controls the molybdenum accommodation in the niobium carbide. So I think the chemical analysis of the carbide here is quite interesting, and your microscopy and fine-scale analysis are very clear. In fact, these results are not good enough, because we need 3D data to understand; if we just use this, that's a problem — we couldn't detect the exact composition of these carbides, right, because for some carbides we can always see some core-shell structure. But there's no doubt that you get both peaks. My question is — because we just tempered for one hour and we can increase the strength and also increase the elongation, right, so we suppose the (Nb,Mo) carbide precipitation occurs in a short time — how much niobium have you already precipitated before tempering, and how effective is the niobium that participates in the secondary hardening? Okay, I think that's a very, very good question. We need very good data to perform this research in the near future — not now, because I got the specimens from industry, and we need theoretical work, so we need very good data to perform this experiment. We couldn't give you an answer at this moment. Short question — first, thanks for an excellent talk, and it's a very difficult story that you show here, how to characterize the microstructure. In fact, I have questions about the EBSD data.
You separate between ferrite and bainite here, and you use misorientation profiles. Well, have you ever tried to use not misorientation profiles but kernel average misorientation, or some other type of map in EBSD? As far as the two structural constituents have different dislocation densities, that is probably better revealed there — a different criterion. Or, in a recent version of some software, you could even use a kernel map on the image quality. So do you have some experience with that? Okay, I think the kernel average misorientation is a good way, and we should also consider the step-size effect. However, I think there are several measures we can consider, and which one is the best one — in fact, we have used image quality to distinguish the M/A phases; it's quite easy because they become dark. And basically we know the bainite has a high dislocation density, so some people can see it in the image quality; but in fact, in our results, I basically suppose the step size is very important. In fact, for this granular bainite the allotriomorphic ferrite is huge, much bigger than the bainite sub-units, so we can distinguish it easily. I'd like to confirm that we've also measured molybdenum in a number of niobium carbides. Your presentation showed very clearly that you have secondary hardening at 600 degrees, and I guess that's probably the peak of the hardening. My question is: what about carbide formation — strengthening carbide formation — in the bainitic microstructure itself? It's obviously incomplete, because you can increase the strength by a secondary heat treatment; but can you tell us anything about the extent of precipitation during or after the bainitic transformation itself, in the primary cooling stage? Okay, that's really a very good question, because we have compared both microstructures and we suppose there's a little bit of difference. Also, we can use Vickers hardness to test first, so comparing with Vickers hardness we can see the clear difference. Okay, okay, I think that roughly answers your question, because we have to go for lunch. Yeah — okay, here is the answer: just compare the Vickers data for the original one and the tempered one. Yeah, thank you very, very much. Really glad to hear that.
A lecture given by Jer Ren Yang, at the Adventures in the Physical Metallurgy of Steels (APMS) conference held in Cambridge University. The metallurgy of a new, microalloyed bainitic steel that is capable of secondary hardening, accompanied by a simultaneous increase in strength and ductility, is introduced. Ideally, a low carbon bainitic microstructure offers an excellent combination of good toughness, strength and weldability. The typical microstructure of low-carbon bainitic steels is composed of a fine substructured bainitic ferrite matrix with certain amounts of uniformly distributed carbon-rich second phases. These second phases, located among the sheaves of bainitic ferrite, consist basically of martensite/austenite (M/A) constituents. As a result of the low-angle character of the boundaries of bainitic ferrite sub-units within the sheaf structure, little or no evidence of ferrite boundaries can be detected by an optical microscope. It is worth further improving appreciation of the transformation and evaluating the effect of substructure characteristics on the properties. The main purpose of this work was to investigate the effect of Mo addition on the development of microstructure in hot-rolled low-carbon Nb-containing bainitic steels. The steel strips were fabricated by the combined processes of controlled rolling and accelerated cooling. Microstructural characterisation and mechanical testing of the corresponding strips have been carried out. The results show that the Mo addition has the advantage of producing a high volume fraction of bainite, which exhibits significant secondary hardening after tempering treatment. It is suggested that the secondary hardening effect provides an additional way to increase the strength of Nb-Mo-containing bainitic steels.
10.5446/18605 (DOI)
The next talk is by John Speer, who comes from the Colorado School of Mines, which is the main center for steels research in the USA. And John Speer is one of the leaders there. He's going to talk about the quenching and partitioning process, which started, you know, fairly recently and has taken off all over the world. The number of papers you see, and the quenching and partitioning process being tried out in many different ways, is really impressive. Thank you, Harry, for your kind introduction. Can everyone hear me okay? Okay. Well, it's really a great pleasure to be here today with all of you. And I'm going to try, in the short time we have, to talk a little bit of science and a little technology. I want to acknowledge my collaborators and co-authors — David Edmonds, who's in the room here. Our acknowledgments were supposed to be on the first slide. But as I thought back on the development of this concept and all the process, one of the great pleasures, I think, was — we often don't have the time, or take the time, to read as much as we should, but I did have an opportunity to read a lot of the literature from the giants, and that was quite an enjoyable process. I really want to acknowledge those people who have influenced probably all of us in the room here. And I also want to thank the many collaborators and students that have worked on this over the years. A little bit of background, going back to the beginning. So this quenching and partitioning process was really designed as a new concept to control retained austenite. And the original process concept was that we interrupt a quench. So normally you would quench austenite, perhaps to room temperature, and form martensite. But we interrupt the quench at a temperature where the martensitic transformation is incomplete. And the idea then is that with some subsequent thermal treatment — either at the same temperature as the quenching temperature, which we called one-step, or at some different temperature, two-step — carbon would go from the martensite into the untransformed austenite and would therefore stabilize it, so that when we then complete the quench back down to room temperature, we have more retained austenite. So that's the basic concept. We've extended that concept more recently to the case where we may have industrial processing concepts where the partitioning process would be non-isothermal. And if we have time at the end I might make a comment or two about that. This is a little bit more complicated. But for example, in hot-rolled sheet production you would use the run-out table cooling process to complete the partial martensitic transformation, and then the coiling temperature, where you wind the coil up, would serve to control basically the time and temperature — the cooling profile — that would define the extent of partitioning. So these are the sort of process concepts that we're thinking about. In the beginning we started by trying to understand what the thermodynamics would tell us about what kind of carbon partitioning could happen and how it could affect the microstructure and properties of the material. And so this is some of the simple analysis from our early papers. And again, if you think about a metastable equilibrium between martensite and austenite, carbon would partition to this point in equilibrium.
In the case where we have varying non-equilibrium fractions of austenite and martensite, though, if we analyze this, and if the interface is immobile, what would happen is that the carbon would partition until its chemical potential is uniform in the two phases. That's why we have not a common tangent construction but a point where the tangent intercepts on the carbon axis are the same. And the interesting thing about this, then, is that depending on the phase fractions we could have multiple different conditions where the carbon potential is equal in the phases. And so we could have very carbon-enriched austenite, much more than at equilibrium, or less carbon-enriched austenite. So depending on the phase fractions we could have some interesting carbon enrichments. And that went into the early development. And then the next step of the process was to try to understand how we would control microstructure, and in that regard this is an important diagram from our early literature, so I want to just walk you through it for a moment. So if we think about quenching austenite: once the quench temperature goes below the martensite start temperature, the amount of austenite that remains is diminishing, and the amount of martensite that forms during the quenching is increasing with undercooling. And then we stop at that quench temperature and we partition the carbon, and most of the carbon would like to partition back into the austenite. And so, depending on how much austenite and martensite are present, that defines the amount of carbon that can partition into the austenite. And so this line right here tells you the carbon concentration of the austenite if the martensite gives up all its carbon to that austenite. And so if we have a lot of martensite and just a little bit of austenite, the carbon enrichment of that austenite is very great, and it diminishes with reduced martensite content and increased austenite. And so this tells you about the stability of the austenite at the quench temperature, after partitioning, before we go through the final quench to room temperature. And then, depending on that stability, this line tells you how much of the austenite that existed after partitioning will remain after final quenching. So this is the amount of that austenite that transforms to new martensite, and if we subtract that from the austenite that was present at the quench temperature we get this red line, and that's the end result: it tells us how much austenite we could retain at room temperature. And so this functional behavior was very important in helping to guide our processing histories as we tried to verify that this concept would work. So we have this peak in the behavior associated with particular quenching temperatures. And in this fairly simple model there were some assumptions that are pretty important from a physical metallurgy standpoint: first of all, that we had ideal partitioning and all the carbon would like to go into the austenite; that we've completely suppressed the precipitation of carbides, or conventional tempering reactions; that once you form the martensite during quenching you don't change the phase fractions anymore — that is, that the interface is immobile; and that the austenite doesn't decompose in other ways, like bainite formation. These assumptions are not always correct, and so they're the source of a lot of interesting follow-up that we can do as a metallurgical community. But still the model is very helpful to guide us.
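The quench-temperature diagram just described can be sketched numerically. The following is a minimal illustration, assuming the idealizations listed above (full carbon partitioning, no carbides, immobile interface, no bainite), a Koistinen-Marburger description of the martensitic transformation, and a crude linear Ms(C) relation; the alloy carbon content, Ms value and coefficients are placeholders, not the values behind the speaker's figure.

```python
import numpy as np

C0 = 0.2          # alloy carbon, wt% (assumed)
Ms0 = 400.0       # martensite start of the alloy, deg C (assumed)
Troom = 25.0
alpha = 0.011     # Koistinen-Marburger rate constant, 1/K

def ms_of_carbon(C):
    # Crude linear carbon dependence of Ms (illustrative only;
    # roughly dMs/dC ~ -423 deg C per wt% as in Andrews-type fits).
    return Ms0 - 423.0 * (C - C0)

def retained_austenite(QT):
    # 1) Partial transformation on quenching to QT (Koistinen-Marburger).
    f_m1 = 1.0 - np.exp(-alpha * (Ms0 - QT))
    f_g = 1.0 - f_m1
    # 2) Idealised partitioning: all carbon ends up in the austenite.
    C_g = C0 / f_g
    # 3) Final quench: fresh martensite forms only if the enriched
    #    austenite's Ms is still above room temperature.
    Ms_g = ms_of_carbon(C_g)
    if Ms_g > Troom:
        f_g *= np.exp(-alpha * (Ms_g - Troom))
    return f_g

QTs = np.linspace(150.0, Ms0, 200)
fRA = [retained_austenite(q) for q in QTs]
best = QTs[int(np.argmax(fRA))]
print(f"peak retained austenite {max(fRA):.3f} at QT ~ {best:.0f} C")
```

Quenching too low leaves little austenite to stabilize; quenching too high leaves the austenite too lean in carbon, so it transforms on the final quench. The retained fraction therefore peaks at an intermediate quench temperature, which is exactly the peak in the red curve of the diagram.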
Now, this particular model that I described here was actually applied more recently in a completely different class of steels, the so-called medium manganese steels, which are very fine grained, intercritically annealed materials — so, fine ferrite-austenite mixtures where the austenite is really stabilized by high manganese concentrations. I'll just show you how this diagram was applied in that case. It's basically the same kind of behavior, except in this case the ferrite-austenite fraction is controlled by the annealing temperature, so the greater the annealing temperature, the more austenite we have. And then, if manganese can partition into that austenite, the austenite is enriched with manganese, but it really depends on the phase fraction: the more austenite that you have, the less manganese enrichment you have, and so the less stable is that austenite in terms of remaining at room temperature. And so, depending on its stability, it can transform to martensite during the final quenching. If we again subtract this from the austenite curve, we end up with this black and blue function that tells us how much austenite would remain at room temperature. So, a very similar application of the same fundamentals to a completely different class of steels, and this is actually how it worked out. So this is a 7 manganese, I think 0.1 carbon, steel. This is the predicted austenite fraction as a function of annealing temperature, again assuming full manganese partitioning between the phases — this was a long annealing time — and these are the experimental data, showing that this model was helpful to understand the behavior of this other class of materials. So it's a very exciting time in steel development now. I think I'll go on the record and say this might be one of the most exciting times for steel development ever. In the automotive community the need for increased fuel economy is dramatic, and at least in the United States there's a tremendous need for steel development to reduce weight and enhance vehicle performance, or maintain vehicle performance. So in terms of application of quenching and partitioning, the automotive industry right now is driving that interest in application, but there are other interests, in ball bearings and high toughness plate steels, so there's much opportunity that hasn't been explored yet in quenching and partitioning. But in the automotive industry — so, we saw this so-called banana diagram before, where we have this range of tensile ductilities and tensile strengths for a whole variety of different kinds of steel materials. And I heard this described, interestingly, very recently by Anil Sachdev of General Motors as the miracle of steel. So we look at this somewhat mundanely, as experts in steel, but the fact is that by small variations in composition and processing we can create this tremendous window of different steel products with very different properties — so, the miracle of steel. But we're trying to push the properties up into higher strengths and higher ductilities, so that's very challenging and yet very exciting for the community. How are we going to get there? So this is a simple model developed by my colleague David Matlock, really looking at composite models for predicting uniform ductility based on phase properties, and you can see that if we look at combinations of ferrite and martensite, we get property combinations that you would expect, that are fairly parallel to the different grades on the banana diagram — so, ferrite-based steels. (A toy version of this kind of two-phase mixture calculation is sketched below.)
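As a toy version of such a composite calculation — not Matlock's actual model, whose inputs and form are not given in the talk — one can mix two power-law phases under an iso-strain assumption and read off the uniform strain from the Considère criterion. All parameters below are illustrative placeholders.

```python
import numpy as np

# Power-law hardening for each phase: sigma = K * eps**n
# (true stress/strain; parameters are illustrative only).
phases = {
    "ferrite":    dict(K=600.0,  n=0.25),   # MPa
    "martensite": dict(K=2200.0, n=0.06),
}

def composite_curve(f_mart, eps):
    """Iso-strain rule of mixtures for a ferrite-martensite blend."""
    s_f = phases["ferrite"]["K"] * eps ** phases["ferrite"]["n"]
    s_m = phases["martensite"]["K"] * eps ** phases["martensite"]["n"]
    return (1.0 - f_mart) * s_f + f_mart * s_m

eps = np.linspace(1e-4, 0.5, 5000)
for f in (0.2, 0.5, 0.8):
    sig = composite_curve(f, eps)
    dsig = np.gradient(sig, eps)
    # Considere criterion: necking starts when d(sigma)/d(eps) = sigma.
    i = int(np.argmax(dsig <= sig))
    print(f"f_mart={f:.1f}: UTS ~ {sig[i]:.0f} MPa, uniform strain ~ {eps[i]:.3f}")
```

Sweeping the martensite fraction traces out a strength-ductility curve of the kind plotted on the banana diagram; swapping one constituent's parameters for those of a stable austenite shifts the whole curve, which is the comparison made next.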
If instead we look at combinations of a particular stable austenite mixed with martensite, we get much better strength-ductility combinations, in this future-need area of steels. And actually, instead of using stable austenite, if we can control the austenite stability — I don't think we really know how to tune the austenite stability in practice as well as we would like to — we can move this curve around as well, so it's a very interesting opportunity. But in terms of how we get to the future, our philosophy in automotive steel development is pretty much that we need to have retained austenite in the microstructure, and so that's driving interest in Q and P steels, in carbide-free bainite steels, in medium manganese steels, as I mentioned — so, a very exciting time. There are a number of things on this slide that I want to mention. First of all, this is all experimental data from quenched and partitioned steels, and some of the people who generated it are in the room here; and this green, this shaded region over here, represents the original target that we set out to achieve in 2003. And so we were pretty happy that our development was fairly successful in getting us some high strength materials with quite good ductility. The dashed line and the solid line here are the same predicted curves that I showed on the last slide. So we were quite happy, but what's happened in the meantime is that the property targets keep getting more challenging. So one of the large automotive companies in 2010 defined a target up here, which is way up in the future band of desired properties, and then the next year put some targets up here. The United States Department of Energy has funded an integrated computational materials engineering program with the industry, and they've set some even greater targets than the industry has set. So when we think about how successful we are in meeting property targets, we also have to recognize that the challenge is increasing. So the targets are moving. I promise this is my last banana diagram for this presentation. Our original models looked at transformation behavior, and subsequent models incorporated some partitioning kinetics, and so these are some different models. We can see that the ferrite gives up its carbon rather quickly; it takes longer to equilibrate the austenite. So what's interesting about that is that, under certain partitioning scenarios, the austenite might contain most of the carbon but with a non-uniform chemical composition gradient. We get some interesting effects then when we try to understand the stability of that austenite: we calculate it using the Koistinen-Marburger equation locally, but in fact we don't know how good that assumption is. So one of the questions for the community, then, is what is the stability of austenite in the case where we have a local concentration gradient that's on the same scale as the martensite microstructure — a sketch of what that local calculation looks like is given below. These are some examples: quenching and partitioning has now been applied in commercial steels, first by Baosteel in China. These are commercial applications. There are other companies around the world, though, who I believe have a real interest in considering this technology. So in the few minutes that remain for my presentation I thought I'd present some curiosities, challenges and opportunities. We've learned a lot over the last ten years, but there's still a lot of things that we don't understand, and hopefully this community will come back at some future time and help.
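For the local-stability question raised above, here is a minimal sketch of the kind of calculation involved: apply the Koistinen-Marburger relation point by point across an assumed carbon profile in an austenite film. The profile shape, film thickness and Ms(C) coefficients are all placeholders for illustration; whether this local application is even valid at the scale of the martensite microstructure is exactly the open question the talk poses.

```python
import numpy as np

alpha = 0.011            # Koistinen-Marburger constant, 1/K
Troom = 25.0

def ms_local(C):
    # Illustrative empirical Ms(C); the steep carbon dependence is the
    # point here, not the exact coefficients.
    return 539.0 - 423.0 * C    # deg C, C in wt%

# Assumed carbon profile across a 100 nm austenite film after a short
# partitioning time: enriched near the martensite interface (x = 0),
# still lean toward the film centre.
x = np.linspace(0.0, 100.0, 501)                 # nm
C = 0.4 + 1.0 * np.exp(-x / 15.0)                # wt% (illustrative)

Ms = ms_local(C)
# Local retained-austenite fraction after the final quench: regions whose
# local Ms is below room temperature are fully retained.
f_ra = np.where(Ms > Troom, np.exp(-alpha * (Ms - Troom)), 1.0)

print(f"carbon range: {C.min():.2f}-{C.max():.2f} wt%")
print(f"film-averaged retained austenite: {f_ra.mean():.2f}")
```

The output shows the qualitative effect of the gradient: the carbon-rich rim of the film is predicted to survive the final quench while the lean interior largely transforms, so a single film can be partly stable and partly unstable.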
So here are Q&P property data: this is the product of tensile strength and elongation versus the amount of retained austenite. And so our desire was to produce high amounts of retained austenite, but you can see from this diagram that in fact the properties are not highly correlated with the fraction of retained austenite. So I think that we still don't completely understand what controls the work hardening behavior and the property combinations in these steels. We've learned a lot about partitioning mechanisms and some of the physical metallurgy, and the group at Leeds has led us in this regard. One of the questions is about carbide precipitation, which, generally speaking, we don't want, because it takes carbon from the microstructure that we would otherwise use for austenite stabilization. I'm showing a particular steel that's partitioned at two different temperatures. In one case we get a lot of epsilon transition carbides in the microstructure, and in the other case, at higher temperature, we get austenite stabilization. So for the community, I think, one of the challenges is how we control the stability of transition carbides, other than perhaps by temperature, according to the models that we think we understand. But if we could turn transition carbide formation on and off using means other than we understand now, it would be a powerful alloy design tool. The last comment I want to make: we've had a lot of discussion about whether the martensite-austenite interface is stationary or mobile. There's been some interesting modeling work done, at Delft in particular. One of my former students, Grant Thomas, looked at some higher alloyed steels, not intended for commercial applications but intended to study the partitioning mechanisms. One of the things that we looked at is the change in the austenite fraction during partitioning in these steels, where you could quench the austenite to room temperature and then partition subsequently. In a high nickel containing steel we found that the austenite fraction was stable; it did enrich in carbon. So the assumption of a stationary interface was pretty good. In the case of a high manganese steel, though, we actually increased the austenite fraction during partitioning, so clearly we think the interface was not immobile. EBSD results: the green is the austenite phase, the change from left to right is with partitioning, and you can see, again — manganese steel: increase in austenite fraction; nickel steel: approximately constant austenite fraction. And these are pole figure diagrams, really looking at what happens to the microstructure during partitioning. This is in the manganese steel: these colors represent austenite orientations, and these, we think, are probably the original austenite grain orientations, confirming that we think the austenite is growing during partitioning rather than nucleating. But we don't completely understand why that happens in one steel and not another. I think I'll bypass this slide and go to the conclusion. Quenching and partitioning science and technology continues to advance. The process has been commercialized, and hopefully there will be other applications besides automotive sheet steels, and growth in those applications. But challenges and opportunities remain, both in terms of science as well as in technology. And I end, in closing here, showing some Q and P microstructures. This is one that we obtained in the laboratory, intercritically annealed, so we have ferrite here. This is actually a commercially produced Q and P steel.
And then we have a mixture of martensite and thick austenite films. So with that I conclude my presentation, and again, thank you for being here. Thank you very much, John, for creating one of the modern concepts of automotive steels. Very interesting talk. We are open for questions. Very short question. In fact, you mentioned that bainite is something that we don't want in Q&P steels. But could you comment a little bit more about that? I understand that we don't want bainite because it takes part of the carbon, but from the viewpoint of mechanical properties I expect that maybe people should go in this direction, to sort of combine some bainite with it. Could you — I don't remember saying that you don't want bainite. But I think that when bainite forms from the austenite at the partitioning temperature, then you really have a mixed microstructure. So you have a Q and P mechanism, but you also have an austempering, bainite formation, mechanism. So you get a mixed microstructure. And I think there are some interesting properties that people are getting in cases where they do have those kinds of mixed microstructures. So, hybrid mechanisms. So I think they might actually be fairly important industrially. So, yes. Thanks — a very nice talk. You mentioned these medium manganese steels; that's probably a very hot topic at the moment. And apart from manganese, what's your opinion on the range of other elements, like aluminum or silicon? Well, I think — aluminum and silicon wouldn't be strong austenite stabilizers, so you'd be looking at different concepts with those elements. So manganese is interesting because it allows us to really increase the retained austenite fractions to quite high levels. There are some other interesting concepts, particularly with high aluminum, where aluminum is being used to reduce the density of the steel. So those are different concepts that are a little bit outside the scope of the design concepts that I discussed today, but are also of considerable interest right now. Okay. Thank you for your talk. I have a question regarding the results from Thomas on the high nickel and high manganese steels, in which he observes an increase in the fraction of austenite with the partitioning step. And I wonder — I know that at those temperatures the diffusion of manganese is extremely small, but I also saw some articles from other researchers in which they think that the diffusion of manganese is underestimated at low temperatures. So do you think this increase in the austenite fraction can be due to some manganese partitioning, actually? So, I don't know the answer to that exactly. This is a very low temperature to be thinking about manganese partitioning, but I do agree — not necessarily in this temperature regime, but actually in the temperature regime of annealing of the medium manganese steels, so in the intercritical regime — we're getting much more manganese partitioning than you would expect from the sort of published diffusivity data. So I agree that manganese diffusion seems to be much faster than we thought it was. I don't know that that's a contributor at the temperatures that Grant Thomas was looking at here, though. Because it's interesting: when you have lower manganese levels, you don't see those — or at least we don't observe such an increase in the fraction of austenite — but at those levels it's observed, so that's maybe related. So what's happening in the nickel steel, then? Okay, well, thank you. Okay, thank you. We have a question from the rest of the world.
Not from the Netherlands; they have a question. The question is: if the intercritical annealing has been done at, say, 800 to 900 degrees, is it possible for the manganese to partition in, say, 100 seconds? That's the question. So are we talking about a medium manganese steel? I guess so. But I guess it doesn't really matter. I think there's some data in the recent literature that shows that you can get significant manganese partitioning at intercritical temperatures in relatively short times. Another question, from AK Steel: is it possible that the manganese-carbon interaction is different compared to the nickel-carbon interaction, and that this affects these results? This author is actually at AK Steel, so I'm not sure of the answer to this. So, is it possible that there's a carbon-nickel interaction that's different than a carbon-manganese interaction? I haven't thought about that. This might not be a good venue to be thinking out loud. John, I wonder if those unexplained experimental observations that you saw may be related to carbon trapping in some places that you haven't mentioned and that should be considered, such as dislocations, twins, or other interfaces, because by quenching the steel before that isothermal aging you create a high dislocation density. And I wonder if the carbon will be really comfortable in those places before partitioning to the austenite. Then maybe you are losing carbon in the process of carbon partitioning, which cannot be ruled out of the carbon balance that you are considering as the way of retaining the austenite at the end. And also, what we have observed in carbide-free bainitic steels is that for subsequent low temperature tempering, around the temperature at which we get the epsilon carbides that you mentioned before, those dislocations segregated with carbon can be a key issue, because with time and temperature the Cottrell atmospheres will evolve into what we have seen as clusters, and after that, those will be the perfect nucleation sites for epsilon carbides. So I wonder whether, if you investigated your dislocation densities and the carbon trapped at dislocations, you could explain the appearance or not of epsilon carbide, and more or less retained austenite at the end, in your microstructure. So that was a complicated question, but I think part of the core of it was: what's the role of carbon trapping in Q and P? And I think carbon trapping should be important. When we have done carbon balances, we have seen instances where we're obviously not getting all the carbon out into the austenite. And if we could reach the kinds of carbon enrichments that we predict from our thermodynamics, that is, if we could turn off the competing mechanisms, we could really get some interesting carbon enrichment. So carbon trapping is part of that. In order to study it we've done some atom probe work, although to study it you also have to understand the tempering reaction and carbide precipitation. So that's all challenging, and we haven't completed that kind of work yet.
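The question above about whether manganese can partition in roughly 100 seconds at intercritical temperatures, and the contrast with fast-moving interstitial carbon that runs through this discussion, can be bounded with a simple x ≈ sqrt(Dt) estimate using Arrhenius diffusivities. The sketch below uses representative literature magnitudes for the D0 and Q of carbon and manganese diffusion in ferrite; these parameters are assumptions for a back-of-envelope comparison, not data from the talk.

```python
# Back-of-envelope diffusion distances: x ~ sqrt(D t), D = D0 exp(-Q / RT).
# D0 and Q below are representative literature values (assumptions, not from the talk).
import math

R = 8.314  # gas constant, J/(mol K)

def diffusion_distance(D0, Q, T_celsius, t_seconds):
    """Return x ~ sqrt(D t) in metres for an Arrhenius diffusivity."""
    D = D0 * math.exp(-Q / (R * (T_celsius + 273.15)))  # m^2/s
    return math.sqrt(D * t_seconds)

# Interstitial carbon in ferrite/martensite at a 400 C partitioning hold, 10 s
x_C = diffusion_distance(D0=6.2e-7, Q=80e3, T_celsius=400, t_seconds=10)
# Substitutional manganese in ferrite at an 800 C intercritical anneal, 100 s
x_Mn = diffusion_distance(D0=1.5e-4, Q=233e3, T_celsius=800, t_seconds=100)

print(f"C  at 400 C, 10 s : ~{x_C * 1e6:.1f} micron")  # ~2 microns: spans martensite laths
print(f"Mn at 800 C, 100 s: ~{x_Mn * 1e9:.0f} nm")     # ~260 nm: ultrafine-grain scale
```

On these rough numbers carbon travels microns in seconds at a typical partitioning temperature, easily decarburizing martensite laths, while manganese moves only a couple of hundred nanometres in 100 seconds even at 800 degrees C: marginal for coarse microstructures but plausible for the ultrafine grains of medium manganese steels, consistent with the answer given above.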
Yeah, according to your DICTRA calculations, the diffusion of carbon is still slow and the diffusion distance is limited. Is there any desirable size of retained austenite, and how can we control the size of retained austenite? So, how do we control the size of the retained austenite? Maybe the size should be limited to stabilize the retained austenite. So we haven't yet ourselves studied experimentally the influence of the starting austenite grain size and morphology extensively, but I have heard that there are some people working on micro-alloyed versions of quenching and partitioning where they've maybe refined the austenite and gotten some better mechanical properties. Right, the size of the retained austenite between the martensite laths. Right, but once you make the austenite and then quench it, the austenite size and morphology are controlled at that point, and you're just moving the carbon around. Carbon moves rather fast, so you can decarburize the martensite pretty quickly. But if you want to change the austenite size and morphology, you need to change the starting microstructure, and that part we haven't gotten to yet. In the medium manganese steels, you mentioned that the partitioning of manganese is quite critical to stabilize the retained austenite, but I think that there is also a contribution from the refined grains. What do you think about that? So, what's the importance of grain refinement in those steels? Grain refinement is going to help stabilize the austenite and help increase the strength as well. So that's a critical aspect of the microstructure in those medium manganese steels: you have very fine ferrite and austenite. I agree. Thank you. Thanks, Hank. Thank you.
A lecture given by John Speer at the Adventures in the Physical Metallurgy of Steels (APMS) conference held at Cambridge University. The quenching and partitioning process involves partial transformation to martensite, followed by an increase in temperature to permit the excess carbon to partition into the residual austenite. The quenching and partitioning (Q&P) concept was first introduced about a decade ago, to utilise carbon in as-quenched martensite to stabilise retained austenite and thereby enhance the mechanical properties. This presentation will provide an update on advancements made in understanding important aspects of the physical metallurgy and microstructure development, within the author's laboratories and elsewhere, which have led to interest in Q&P as a potential route for producing commercial steels in volume. A variety of applications have been explored in Q&P laboratory investigations. Initial industrialisation has focused on automotive sheet steels, and substantial activity is now underway to meet aggressive near-term targets for vehicle lightweighting using Q&P steels or other novel approaches to generate microstructures with enhanced austenite fractions. The current status of some of these efforts will be reported.